Re: [Discussion] KIP-473: Add support for sasl login authentication callback handler to KafkaLog4jAppender

2019-05-24 Thread Manikumar
Thanks for the KIP. LGTM.

On Fri, May 24, 2019 at 11:34 PM r...@confluent.io 
wrote:

> KIP-425 adds SASL support to the KafkaLog4jAppender; however, it does not
> include the credential-handling improvements added with KIP-86. This is a
> proposal to expose `sasl.login.callback.handler.class` for use by the
> underlying producer.
>
> KIP-473
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-473%3A+Enable+KafkaLog4JAppender+to+use+SASL+Authentication+Callback+Handlers
>
> KIP-425
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-425%3A+Add+some+Log4J+Kafka+Appender+Properties+for+Producing+to+Secured+Brokers
>
> KIP-86
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-86%3A+Configurable+SASL+callback+handlers
>
>
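For context, a hypothetical appender configuration using the proposed property might look like the fragment below. All property and class names here are placeholders sketched from the proposal's intent, not confirmed API; the final names are defined in KIP-473 itself.

```properties
# Hypothetical log4j.properties fragment. KIP-425 already exposes SASL
# properties on the appender; KIP-473 proposes additionally exposing the
# login callback handler class. Names below are illustrative.
log4j.appender.kafka=org.apache.kafka.log4jappender.KafkaLog4jAppender
log4j.appender.kafka.brokerList=broker1:9093
log4j.appender.kafka.topic=app-logs
log4j.appender.kafka.securityProtocol=SASL_SSL
log4j.appender.kafka.saslMechanism=OAUTHBEARER
# New in this proposal: forwarded to the underlying producer as
# sasl.login.callback.handler.class
log4j.appender.kafka.saslLoginCallbackHandlerClass=com.example.MyLoginHandler
```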


[jira] [Resolved] (KAFKA-8422) Client should not use old versions of OffsetsForLeaderEpoch

2019-05-24 Thread Ismael Juma (JIRA)


 [ https://issues.apache.org/jira/browse/KAFKA-8422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ismael Juma resolved KAFKA-8422.

Resolution: Fixed

> Client should not use old versions of OffsetsForLeaderEpoch
> ---
>
> Key: KAFKA-8422
> URL: https://issues.apache.org/jira/browse/KAFKA-8422
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 2.3.0
>
>
> For KIP-320, we changed the permissions of the OffsetsForLeaderEpoch API to be 
> topic-level so that consumers do not require Cluster permission. However, 
> there is no way for a consumer to know whether the broker is new enough to 
> support this permission scheme. The only way to be sure is to use the version 
> of this API that was bumped in 2.3. For older versions, we should revert to 
> the old behavior.
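The fallback described above amounts to a version gate. A standalone sketch of the decision, where the version threshold and names are illustrative rather than the consumer's actual code:

```java
// Sketch (not Kafka's real Fetcher code): gate offset validation on the
// OffsetsForLeaderEpoch version the broker supports. Older versions
// required Cluster permission, so against an old broker the consumer
// should skip the new validation path and revert to the old behavior.
public class EpochValidationGate {
    // Hypothetical threshold modeling the API bump that shipped in 2.3.
    static final short MIN_VERSION_FOR_TOPIC_LEVEL_AUTH = 3;

    public static boolean shouldValidateWithBroker(short maxSupportedVersion) {
        return maxSupportedVersion >= MIN_VERSION_FOR_TOPIC_LEVEL_AUTH;
    }

    public static void main(String[] args) {
        System.out.println(shouldValidateWithBroker((short) 2)); // old broker: false
        System.out.println(shouldValidateWithBroker((short) 3)); // 2.3+ broker: true
    }
}
```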



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk8 #3675

2019-05-24 Thread Apache Jenkins Server
See 


Changes:

[cmccabe] KAFKA-8341. Retry Consumer group operation for NOT_COORDINATOR error

--
[...truncated 2.46 MB...]

org.apache.kafka.connect.data.ConnectSchemaTest > testFieldsOnStructSchema 
STARTED

org.apache.kafka.connect.data.ConnectSchemaTest > testFieldsOnStructSchema 
PASSED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchArray STARTED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchArray PASSED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchBytes STARTED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchBytes PASSED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchFloat STARTED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchFloat PASSED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchInt16 STARTED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchInt16 PASSED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchInt32 STARTED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchInt32 PASSED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchInt64 STARTED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchInt64 PASSED

org.apache.kafka.connect.data.ConnectSchemaTest > testMapEquality STARTED

org.apache.kafka.connect.data.ConnectSchemaTest > testMapEquality PASSED

org.apache.kafka.connect.data.ConnectSchemaTest > testPrimitiveEquality STARTED

org.apache.kafka.connect.data.ConnectSchemaTest > testPrimitiveEquality PASSED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchTimestamp STARTED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchTimestamp PASSED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchStructWrongNestedSchema STARTED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchStructWrongNestedSchema PASSED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchDecimal STARTED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchDecimal PASSED

org.apache.kafka.connect.data.ConnectSchemaTest > testArrayDefaultValueEquality 
STARTED

org.apache.kafka.connect.data.ConnectSchemaTest > testArrayDefaultValueEquality 
PASSED

org.apache.kafka.connect.data.ConnectSchemaTest > testEmptyStruct STARTED

org.apache.kafka.connect.data.ConnectSchemaTest > testEmptyStruct PASSED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchMapSomeKeys STARTED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchMapSomeKeys PASSED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchArraySomeMatch STARTED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchArraySomeMatch PASSED

org.apache.kafka.connect.data.ConnectSchemaTest > testValidateValueMismatchDate 
STARTED

org.apache.kafka.connect.data.ConnectSchemaTest > testValidateValueMismatchDate 
PASSED

org.apache.kafka.connect.data.ConnectSchemaTest > testValidateValueMismatchInt8 
STARTED

org.apache.kafka.connect.data.ConnectSchemaTest > testValidateValueMismatchInt8 
PASSED

org.apache.kafka.connect.data.ConnectSchemaTest > testValidateValueMismatchTime 
STARTED

org.apache.kafka.connect.data.ConnectSchemaTest > testValidateValueMismatchTime 
PASSED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchDouble STARTED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchDouble PASSED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchMapKey STARTED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchMapKey PASSED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchString STARTED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMismatchString PASSED

org.apache.kafka.connect.data.ConnectSchemaTest > testValidateValueMatchingType 
STARTED

org.apache.kafka.connect.data.ConnectSchemaTest > testValidateValueMatchingType 
PASSED

org.apache.kafka.connect.data.ConnectSchemaTest > testArrayEquality STARTED

org.apache.kafka.connect.data.ConnectSchemaTest > testArrayEquality PASSED

org.apache.kafka.connect.data.ConnectSchemaTest > testFieldsOnlyValidForStructs 
STARTED

org.apache.kafka.connect.data.ConnectSchemaTest > testFieldsOnlyValidForStructs 
PASSED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMatchingLogicalType STARTED

org.apache.kafka.connect.data.ConnectSchemaTest > 
testValidateValueMatchingLogicalType PASSED

org.apache.kafka.connect.data.StructTest > 

Jenkins build is back to normal : kafka-trunk-jdk11 #572

2019-05-24 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-8430) Unit test to make sure `group.id` and `group.instance.id` won't affect each other

2019-05-24 Thread Boyang Chen (JIRA)
Boyang Chen created KAFKA-8430:
--

 Summary: Unit test to make sure `group.id` and `group.instance.id` 
won't affect each other
 Key: KAFKA-8430
 URL: https://issues.apache.org/jira/browse/KAFKA-8430
 Project: Kafka
  Issue Type: Sub-task
Reporter: Boyang Chen








[jira] [Resolved] (KAFKA-8341) AdminClient should retry coordinator lookup after NOT_COORDINATOR error

2019-05-24 Thread Vikas Singh (JIRA)


 [ https://issues.apache.org/jira/browse/KAFKA-8341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vikas Singh resolved KAFKA-8341.

Resolution: Fixed

Fixed in commit 46a02f3231cd6d340c622636159b9f59b4b3cb6e.

> AdminClient should retry coordinator lookup after NOT_COORDINATOR error
> ---
>
> Key: KAFKA-8341
> URL: https://issues.apache.org/jira/browse/KAFKA-8341
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Vikas Singh
>Priority: Major
>
> If a group operation (e.g. DescribeGroup) fails because the coordinator has 
> moved, the AdminClient should lookup the coordinator before retrying the 
> operation. Currently we will either fail or just retry anyway. This is 
> similar in some ways to controller rediscovery after getting NOT_CONTROLLER 
> errors.
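The retry shape described in the ticket can be sketched as follows (illustrative stand-ins, not AdminClient internals): on a NOT_COORDINATOR error, refresh the coordinator first, then retry the group operation.

```java
import java.util.function.Supplier;

// Sketch of coordinator rediscovery on NOT_COORDINATOR. Names are
// illustrative; the real AdminClient drives this through its call chain.
public class CoordinatorRetry {
    enum Result { OK, NOT_COORDINATOR }

    static int lookups = 0;

    static void lookupCoordinator() { lookups++; }

    public static Result callWithRetry(Supplier<Result> op, int maxRetries) {
        Result r = op.get();
        for (int i = 0; i < maxRetries && r == Result.NOT_COORDINATOR; i++) {
            lookupCoordinator();      // refresh the coordinator first
            r = op.get();             // then retry the group operation
        }
        return r;
    }

    public static void main(String[] args) {
        // Fails once with NOT_COORDINATOR, then succeeds after re-lookup.
        int[] calls = {0};
        Result r = callWithRetry(
            () -> calls[0]++ == 0 ? Result.NOT_COORDINATOR : Result.OK, 3);
        System.out.println(r + " after " + lookups + " lookup(s)");
    }
}
```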





Build failed in Jenkins: kafka-2.3-jdk8 #14

2019-05-24 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] MINOR: Updated configuration docs with RocksDBConfigSetter#close 
(#6784)

[matthias] MINOR: add Kafka Streams upgrade notes for 2.3 release (#6758)

--
[...truncated 2.92 MB...]
kafka.zk.KafkaZkClientTest > testSetAndGetConsumerOffset PASSED

kafka.zk.KafkaZkClientTest > testClusterIdMethods STARTED

kafka.zk.KafkaZkClientTest > testClusterIdMethods PASSED

kafka.zk.KafkaZkClientTest > testEntityConfigManagementMethods STARTED

kafka.zk.KafkaZkClientTest > testEntityConfigManagementMethods PASSED

kafka.zk.KafkaZkClientTest > testUpdateLeaderAndIsr STARTED

kafka.zk.KafkaZkClientTest > testUpdateLeaderAndIsr PASSED

kafka.zk.KafkaZkClientTest > testUpdateBrokerInfo STARTED

kafka.zk.KafkaZkClientTest > testUpdateBrokerInfo PASSED

kafka.zk.KafkaZkClientTest > testCreateRecursive STARTED

kafka.zk.KafkaZkClientTest > testCreateRecursive PASSED

kafka.zk.KafkaZkClientTest > testGetConsumerOffsetNoData STARTED

kafka.zk.KafkaZkClientTest > testGetConsumerOffsetNoData PASSED

kafka.zk.KafkaZkClientTest > testDeleteTopicPathMethods STARTED

kafka.zk.KafkaZkClientTest > testDeleteTopicPathMethods PASSED

kafka.zk.KafkaZkClientTest > testSetTopicPartitionStatesRaw STARTED

kafka.zk.KafkaZkClientTest > testSetTopicPartitionStatesRaw PASSED

kafka.zk.KafkaZkClientTest > testAclManagementMethods STARTED

kafka.zk.KafkaZkClientTest > testAclManagementMethods PASSED

kafka.zk.KafkaZkClientTest > testPreferredReplicaElectionMethods STARTED

kafka.zk.KafkaZkClientTest > testPreferredReplicaElectionMethods PASSED

kafka.zk.KafkaZkClientTest > testPropagateLogDir STARTED

kafka.zk.KafkaZkClientTest > testPropagateLogDir PASSED

kafka.zk.KafkaZkClientTest > testGetDataAndStat STARTED

kafka.zk.KafkaZkClientTest > testGetDataAndStat PASSED

kafka.zk.KafkaZkClientTest > testReassignPartitionsInProgress STARTED

kafka.zk.KafkaZkClientTest > testReassignPartitionsInProgress PASSED

kafka.zk.KafkaZkClientTest > testCreateTopLevelPaths STARTED

kafka.zk.KafkaZkClientTest > testCreateTopLevelPaths PASSED

kafka.zk.KafkaZkClientTest > testIsrChangeNotificationGetters STARTED

kafka.zk.KafkaZkClientTest > testIsrChangeNotificationGetters PASSED

kafka.zk.KafkaZkClientTest > testLogDirEventNotificationsDeletion STARTED

kafka.zk.KafkaZkClientTest > testLogDirEventNotificationsDeletion PASSED

kafka.zk.KafkaZkClientTest > testGetLogConfigs STARTED

kafka.zk.KafkaZkClientTest > testGetLogConfigs PASSED

kafka.zk.KafkaZkClientTest > testBrokerSequenceIdMethods STARTED

kafka.zk.KafkaZkClientTest > testBrokerSequenceIdMethods PASSED

kafka.zk.KafkaZkClientTest > testAclMethods STARTED

kafka.zk.KafkaZkClientTest > testAclMethods PASSED

kafka.zk.KafkaZkClientTest > testCreateSequentialPersistentPath STARTED

kafka.zk.KafkaZkClientTest > testCreateSequentialPersistentPath PASSED

kafka.zk.KafkaZkClientTest > testConditionalUpdatePath STARTED

kafka.zk.KafkaZkClientTest > testConditionalUpdatePath PASSED

kafka.zk.KafkaZkClientTest > testDeleteTopicZNode STARTED

kafka.zk.KafkaZkClientTest > testDeleteTopicZNode PASSED

kafka.zk.KafkaZkClientTest > testDeletePath STARTED

kafka.zk.KafkaZkClientTest > testDeletePath PASSED

kafka.zk.KafkaZkClientTest > testGetBrokerMethods STARTED

kafka.zk.KafkaZkClientTest > testGetBrokerMethods PASSED

kafka.zk.KafkaZkClientTest > testCreateTokenChangeNotification STARTED

kafka.zk.KafkaZkClientTest > testCreateTokenChangeNotification PASSED

kafka.zk.KafkaZkClientTest > testGetTopicsAndPartitions STARTED

kafka.zk.KafkaZkClientTest > testGetTopicsAndPartitions PASSED

kafka.zk.KafkaZkClientTest > testRegisterBrokerInfo STARTED

kafka.zk.KafkaZkClientTest > testRegisterBrokerInfo PASSED

kafka.zk.KafkaZkClientTest > testRetryRegisterBrokerInfo STARTED

kafka.zk.KafkaZkClientTest > testRetryRegisterBrokerInfo PASSED

kafka.zk.KafkaZkClientTest > testConsumerOffsetPath STARTED

kafka.zk.KafkaZkClientTest > testConsumerOffsetPath PASSED

kafka.zk.KafkaZkClientTest > testDeleteRecursiveWithControllerEpochVersionCheck 
STARTED

kafka.zk.KafkaZkClientTest > testDeleteRecursiveWithControllerEpochVersionCheck 
PASSED

kafka.zk.KafkaZkClientTest > testControllerManagementMethods STARTED

kafka.zk.KafkaZkClientTest > testControllerManagementMethods PASSED

kafka.zk.KafkaZkClientTest > testTopicAssignmentMethods STARTED

kafka.zk.KafkaZkClientTest > testTopicAssignmentMethods PASSED

kafka.zk.KafkaZkClientTest > testPropagateIsrChanges STARTED

kafka.zk.KafkaZkClientTest > testPropagateIsrChanges PASSED

kafka.zk.KafkaZkClientTest > testControllerEpochMethods STARTED

kafka.zk.KafkaZkClientTest > testControllerEpochMethods PASSED

kafka.zk.KafkaZkClientTest > testDeleteRecursive STARTED

kafka.zk.KafkaZkClientTest > testDeleteRecursive PASSED

kafka.zk.KafkaZkClientTest > testGetTopicPartitionStates STARTED

kafka.zk.KafkaZkClientTest > testGetTopicPartitionStates 

Re: [VOTE] KIP-430 - Return Authorized Operations in Describe Responses

2019-05-24 Thread Colin McCabe
Hi all,

We discovered an issue with how compatibility was handled in this KIP.  There 
should always be a way for the client to know that the information that was 
requested was omitted by the broker because it was too old.  I changed the 
"Compatibility, Deprecation, and Migration Plan" to add some information about 
how this should be done.

Specifically, I added this section:

 > When the AdminClient is talking to a broker which does not support 
 > KIP-430, it will fill in either null or UnsupportedVersionException for 
 > the returned ACL operations fields in objects.  For example, 
 > ConsumerGroupDescription#authorizedOperations will be null if the broker 
 > did not supply this information.  
 > DescribeClusterResult#authorizedOperations will throw an 
 > UnsupportedVersionException if the broker did not supply this information.

Also see https://github.com/apache/kafka/pull/6812/files
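The two behaviors quoted above (null for collection-valued fields, UnsupportedVersionException for the cluster-level result) can be modeled with simplified stand-ins; these are not the real AdminClient result classes, just a sketch of what client code should be prepared to handle:

```java
import java.util.Set;

// Simplified stand-ins for the KIP-430 result types on an old broker.
public class AuthorizedOpsHandling {
    static class UnsupportedVersionException extends RuntimeException {}

    // Models ConsumerGroupDescription#authorizedOperations: null when the
    // broker did not supply the information.
    static Set<String> groupAuthorizedOperations(boolean brokerSupportsKip430) {
        return brokerSupportsKip430 ? Set.of("READ", "DESCRIBE") : null;
    }

    // Models DescribeClusterResult#authorizedOperations: throws when the
    // broker did not supply the information.
    static Set<String> clusterAuthorizedOperations(boolean brokerSupportsKip430) {
        if (!brokerSupportsKip430) throw new UnsupportedVersionException();
        return Set.of("ALTER", "DESCRIBE");
    }

    public static void main(String[] args) {
        System.out.println(groupAuthorizedOperations(false)); // prints: null
        try {
            clusterAuthorizedOperations(false);
        } catch (UnsupportedVersionException e) {
            System.out.println("cluster info unavailable on old broker");
        }
    }
}
```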

best,
Colin

On Wed, Feb 27, 2019, at 05:29, Rajini Sivaram wrote:
> The vote has passed with 5 binding votes (Harsha, Gwen, Manikumar, Colin,
> me) and 2 non-binding votes (Satish, Mickael). I will update the KIP page.
> 
> Many thanks to everyone for the feedback and votes.
> 
> Regards,
> 
> Rajini
> 
> 
> On Tue, Feb 26, 2019 at 6:41 PM Rajini Sivaram 
> wrote:
> 
> > Thanks Colin, I have updated the KIP to mention that we don't return
> > UNKNOWN, ANY or ALL.
> >
> > Regards,
> >
> > Rajini
> >
> > On Tue, Feb 26, 2019 at 6:10 PM Colin McCabe  wrote:
> >
> >> Thanks, Rajini!  Just to make it clear, can we spell out that we don't
> >> set the UNKNOWN, ANY, or ALL bits ever?
> >>
> >> +1 once that's resolved.
> >>
> >> cheers,
> >> Colin
> >>
> >>
> >> On Mon, Feb 25, 2019, at 04:11, Rajini Sivaram wrote:
> >> > Hi Colin,
> >> >
> >> > Yes, it makes sense to reduce response size by using bit fields. Updated
> >> > the KIP.
> >> >
> >> > I have also updated the KIP to say that clients will ignore any bits
> >> set by
> >> > the broker that are unknown to the client, so there will be no UNKNOWN
> >> > operations in the set returned by AdminClient. Brokers may however set
> >> bits
> >> > regardless of client version. Does that match your expectation?
> >> >
> >> > Thank you,
> >> >
> >> > Rajini
> >> >
> >> >
> >> > On Sat, Feb 23, 2019 at 1:03 AM Colin McCabe 
> >> wrote:
> >> >
> >> > > Hi Rajini,
> >> > >
> >> > > Thanks for the explanations.
> >> > >
> >> > > On Fri, Feb 22, 2019, at 11:59, Rajini Sivaram wrote:
> >> > > > Hi Colin,
> >> > > >
> >> > > > Thanks for the review. Sorry I meant that an array of INT8's, each
> >> of
> >> > > which
> >> > > > is an AclOperation code will be returned. I have clarified that in
> >> the
> >> > > KIP.
> >> > >
> >> > > Do you think it's worth considering a bitfield here still?  An array
> >> will
> >> > > take up at least 4 bytes for the length, plus whatever length the
> >> elements
> >> > > are.  A 32-bit bitfield would pretty much always take up less space.
> >> And
> >> > > we can have a new version of the RPC with 64 bits or whatever if we
> >> outgrow
> >> > > 32 operations.  MetadataResponse for a big cluster could contain
> >> quite a
> >> > > lot of topics, tens or hundreds of thousands.  So the space savings
> >> could
> >> > > be considerable.
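The bitfield encoding under discussion can be sketched as follows: each operation's INT8 code becomes one bit of a 32-bit field, and a decoding client simply skips bits it does not recognize, so no UNKNOWN entries are produced. Codes 3 and 4 follow AclOperation's byte encoding (READ and WRITE); the rest of the names here are illustrative.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of encoding a set of AclOperation codes as a 32-bit bitfield,
// and of a client decoding it while ignoring bits it does not know.
public class OpsBitfield {
    public static int encode(int[] opCodes) {
        int bits = 0;
        for (int code : opCodes) bits |= 1 << code;
        return bits;
    }

    // knownMax models the highest code this client version understands;
    // bits above it are silently ignored rather than surfaced as UNKNOWN.
    public static List<Integer> decode(int bits, int knownMax) {
        List<Integer> ops = new ArrayList<>();
        for (int code = 0; code <= knownMax; code++)
            if ((bits & (1 << code)) != 0) ops.add(code);
        return ops;
    }

    public static void main(String[] args) {
        int bits = encode(new int[]{3, 4, 10}); // READ, WRITE, a newer op
        // An older client that only knows codes up to 8 ignores bit 10.
        System.out.println(decode(bits, 8)); // prints: [3, 4]
    }
}
```

A 32-bit field is a fixed 4 bytes per resource, versus 4 bytes of array length plus one byte per element, which is where the savings in a large MetadataResponse come from.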
> >> > >
> >> > > >
> >> > > > All permitted operations will be returned from the set of supported
> >> > > > operations on each resource. This is regardless of whether the
> >> access was
> >> > > > implicitly or explicitly granted. Have clarified that in the KIP.
> >> > >
> >> > > Thanks.
> >> > >
> >> > > >
> >> > > > Since the values returned are INT8 codes, clients can simply ignore
> >> any
> >> > > > they don't recognize. Java clients convert these into
> >> > > AclOperation.UNKNOWN.
> >> > > > That way we don't need to update Metadata/describe request versions
> >> when
> >> > > > new operations are added to a resource. This is consistent with
> >> > > > DescribeAcls behaviour. Have added this to the compatibility
> >> section of
> >> > > the
> >> > > > KIP.
> >> > >
> >> > > Displaying "unknown" for new AclOperations made sense for
> >> DescribeAcls,
> >> > > since the ACL is explicitly referencing the new AclOperation.   For
> >> > > example, if you upgrade your Kafka cluster to a new version that
> >> supports
> >> > > DESCRIBE_CONFIGS, your old ACLs still don't reference
> >> DESCRIBE_CONFIGS.
> >> > >
> >> > > In contrast, in the case here, existing topics (or other resources)
> >> might
> >> > > pick up the new ACLOperation just by upgrading Kafka.  For example,
> >> if you
> >> > > had ALL permission on a topic and you upgrade to a new version with
> >> > > DESCRIBE_CONFIGS, you now have DESCRIBE_CONFIGS permission on that
> >> topic.
> >> > > This would result in a lot of "unknowns" being displayed here, which
> >> might
> >> > > not be ideal.
> >> > >
> >> > > Also, there is an argument from intent-- the intention here is to let
> >> you
> >> > > know what you can do with a resource that already 

[jira] [Created] (KAFKA-8429) Consumer should handle offset change while OffsetForLeaderEpoch is inflight

2019-05-24 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-8429:
--

 Summary: Consumer should handle offset change while 
OffsetForLeaderEpoch is inflight
 Key: KAFKA-8429
 URL: https://issues.apache.org/jira/browse/KAFKA-8429
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson
Assignee: Jason Gustafson
 Fix For: 2.3.0


It is possible for the offset of a partition to be changed while we are in the 
middle of validation. If the OffsetForLeaderEpoch request is in-flight and the 
offset changes, we need to redo the validation after it returns.
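The race described above can be sketched as remembering which position the inflight validation was started for, and redoing the validation if the position moved in the meantime (illustrative names, not the real consumer code):

```java
// Sketch of detecting an offset change while OffsetsForLeaderEpoch is in
// flight: a stale response is discarded and validation is restarted.
public class ValidationRace {
    long position = 100L;          // current fetch position
    long validatingFrom = -1L;     // position the inflight request captured

    void startValidation() { validatingFrom = position; }

    // Returns true if the response can be applied, false if we must redo.
    boolean onValidationResponse() {
        if (position != validatingFrom) {
            startValidation();     // offset changed mid-flight: re-validate
            return false;
        }
        return true;
    }

    public static void main(String[] args) {
        ValidationRace v = new ValidationRace();
        v.startValidation();
        v.position = 200L;         // e.g. a seek() while request is in flight
        System.out.println(v.onValidationResponse()); // false: stale response
        System.out.println(v.onValidationResponse()); // true: revalidated
    }
}
```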





Re: [DISCUSSION] KIP-418: A method-chaining way to branch KStream

2019-05-24 Thread Michael Drogalis
Matthias: I think that's pretty reasonable from my point of view. Good
suggestion.

On Thu, May 23, 2019 at 9:50 PM Matthias J. Sax 
wrote:

> Interesting discussion.
>
> I am wondering, if we cannot unify the advantage of both approaches:
>
>
>
> KStream#split() -> KBranchedStream
>
> // branch is not easily accessible in current scope
> KBranchedStream#branch(Predicate, Consumer)
>   -> KBranchedStream
>
> // assign a name to the branch and
> // return the sub-stream to the current scope later
> //
> // can be as simple as `#branch(p, s->s, "name")`
> // or as complex as `#branch(p, s->s.filter(...), "name")`
> KBranchedStream#branch(Predicate, Function, String)
>   -> KBranchedStream
>
> // default branch is not easily accessible
> // return map of all named sub-streams into current scope
> KBranchedStream#default(Consumer)
>   -> Map
>
> // assign custom name to default-branch
> // return map of all named sub-streams into current scope
> KBranchedStream#default(Function, String)
>   -> Map
>
> // assign a default name for default
> // return map of all named sub-streams into current scope
> KBranchedStream#defaultBranch(Function)
>   -> Map
>
> // return map of all named sub-streams into current scope
> KBranchedStream#noDefaultBranch()
>   -> Map
>
>
>
> Hence, for each sub-stream, the user can pick to add a name and return
> the branch "result" to the calling scope or not. The implementation can
> also check at runtime that all returned names are unique. The returned
> Map can be empty and it's also optional to use the Map.
>
> To me, it seems like a good way to get best of both worlds.
>
> Thoughts?
>
>
>
> -Matthias
>
>
>
>
> On 5/6/19 5:15 PM, John Roesler wrote:
> > Ivan,
> >
> > That's a very good point about the "start" operator in the dynamic case.
> > I had no problem with "split()"; I was just questioning the necessity.
> > Since you've provided a proof of necessity, I'm in favor of the
> > "split()" start operator. Thanks!
> >
> > Separately, I'm interested to see where the present discussion leads.
> > I've written enough Javascript code in my life to be suspicious of
> > nested closures. You have a good point about using method references (or
> > indeed function literals also work). It should be validating that this
> > was also the JS community's first approach to flattening the logic when
> > their nested closure situation got out of hand. Unfortunately, it's
> > replacing nesting with redirection, both of which disrupt code
> > readability (but in different ways for different reasons). In other
> > words, I agree that function references is *the* first-order solution if
> > the nested code does indeed become a problem.
> >
> > However, the history of JS also tells us that function references aren't
> > the end of the story either, and you can see that by observing that
> > there have been two follow-on eras, as they continue trying to cope with
> > the consequences of living in such a callback-heavy language. First, you
> > have Futures/Promises, which essentially let you convert nested code to
> > method-chained code (Observables/FP is a popular variation on this).
> > Most lately, you have async/await, which is an effort to apply language
> > (not just API) syntax to the problem, and offer the "flattest" possible
> > programming style to solve the problem (because you get back to just one
> > code block per functional unit).
> >
> > Stream-processing is a different domain, and Java+KStreams is nowhere
> > near as callback heavy as JS, so I don't think we have to take the JS
> > story for granted, but then again, I think we can derive some valuable
> > lessons by looking sideways to adjacent domains. I'm just bringing this
> > up to inspire further/deeper discussion. At the same time, just like JS,
> > we can afford to take an iterative approach to the problem.
> >
> > Separately again, I'm interested in the post-branch merge (and I'd also
> > add join) problem that Paul brought up. We can clearly punt on it, by
> > terminating the nested branches with sink operators. But is there a DSL
> > way to do it?
> >
> > Thanks again for your driving this,
> > -John
> >
> > On Thu, May 2, 2019 at 7:39 PM Paul Whalen wrote:
> >
> > Ivan, I’ll definitely forfeit my point on the clumsiness of the
> > branch(predicate, consumer) solution, I don’t see any real drawbacks
> > for the dynamic case.
> >
> > IMO the one trade off to consider at this point is the scope
> > question. I don’t know if I totally agree that “we rarely need them
> > in the same scope” since merging the branches back together later
> > seems like a perfectly plausible use case that can be a lot nicer
> > when the branched streams are in the same scope. That being said,
> > for the reasons Ivan listed, I think it is overall the better
> > solution - working around the scope thing is easy enough if you need
> > to.
> >
> > > On May 2, 2019, at 7:00 PM, 
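A toy model of the chaining proposed earlier in this thread, using plain lists instead of KStream, just to show the semantics being discussed; none of this is real Kafka Streams API, and since `default` is a Java keyword the toy uses `defaultBranch`:

```java
import java.util.*;
import java.util.function.*;

// Toy KBranchedStream: each branch() consumes matching elements, named
// branches are collected into a Map returned by the terminal call.
public class BranchSketch {
    final Map<String, List<Integer>> named = new LinkedHashMap<>();
    final List<Integer> remaining;

    BranchSketch(List<Integer> source) {
        this.remaining = new ArrayList<>(source);
    }

    // branch(Predicate, Consumer): branch handled in place, not returned.
    BranchSketch branch(Predicate<Integer> p, Consumer<List<Integer>> c) {
        c.accept(take(p));
        return this;
    }

    // branch(Predicate, Function, String): name it to get it back later.
    BranchSketch branch(Predicate<Integer> p,
                        Function<List<Integer>, List<Integer>> f, String name) {
        named.put(name, f.apply(take(p)));
        return this;
    }

    Map<String, List<Integer>> defaultBranch(
            Function<List<Integer>, List<Integer>> f, String name) {
        named.put(name, f.apply(remaining));
        return named;
    }

    private List<Integer> take(Predicate<Integer> p) {
        List<Integer> matched = new ArrayList<>();
        for (Iterator<Integer> it = remaining.iterator(); it.hasNext(); ) {
            int v = it.next();
            if (p.test(v)) { matched.add(v); it.remove(); }
        }
        return matched;
    }

    public static void main(String[] args) {
        Map<String, List<Integer>> branches =
            new BranchSketch(List.of(1, 2, 3, 4, 5, 6))
                .branch(v -> v % 2 == 0,
                        evens -> System.out.println("evens: " + evens))
                .branch(v -> v > 3, s -> s, "big-odds")
                .defaultBranch(s -> s, "rest");
        System.out.println(branches); // prints: {big-odds=[5], rest=[1, 3]}
    }
}
```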

Build failed in Jenkins: kafka-trunk-jdk11 #571

2019-05-24 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] MINOR: Updated configuration docs with RocksDBConfigSetter#close 
(#6784)

--
[...truncated 2.61 MB...]

org.apache.kafka.connect.data.SchemaBuilderTest > testArrayBuilder STARTED

org.apache.kafka.connect.data.SchemaBuilderTest > testArrayBuilder PASSED

org.apache.kafka.connect.data.SchemaBuilderTest > 
testArrayBuilderInvalidDefault STARTED

org.apache.kafka.connect.data.SchemaBuilderTest > 
testArrayBuilderInvalidDefault PASSED

org.apache.kafka.connect.data.SchemaBuilderTest > testDuplicateFields STARTED

org.apache.kafka.connect.data.SchemaBuilderTest > testDuplicateFields PASSED

org.apache.kafka.connect.data.SchemaBuilderTest > 
testStringBuilderInvalidDefault STARTED

org.apache.kafka.connect.data.SchemaBuilderTest > 
testStringBuilderInvalidDefault PASSED

org.apache.kafka.connect.data.SchemaBuilderTest > testEmptyStruct STARTED

org.apache.kafka.connect.data.SchemaBuilderTest > testEmptyStruct PASSED

org.apache.kafka.connect.data.SchemaBuilderTest > testMapBuilderInvalidDefault 
STARTED

org.apache.kafka.connect.data.SchemaBuilderTest > testMapBuilderInvalidDefault 
PASSED

org.apache.kafka.connect.data.SchemaBuilderTest > 
testInt64BuilderInvalidDefault STARTED

org.apache.kafka.connect.data.SchemaBuilderTest > 
testInt64BuilderInvalidDefault PASSED

org.apache.kafka.connect.data.SchemaBuilderTest > testInt8Builder STARTED

org.apache.kafka.connect.data.SchemaBuilderTest > testInt8Builder PASSED

org.apache.kafka.connect.data.SchemaBuilderTest > testFieldSchemaNull STARTED

org.apache.kafka.connect.data.SchemaBuilderTest > testFieldSchemaNull PASSED

org.apache.kafka.connect.data.SchemaBuilderTest > 
testInt32BuilderInvalidDefault STARTED

org.apache.kafka.connect.data.SchemaBuilderTest > 
testInt32BuilderInvalidDefault PASSED

org.apache.kafka.connect.data.SchemaBuilderTest > testInt8BuilderInvalidDefault 
STARTED

org.apache.kafka.connect.data.SchemaBuilderTest > testInt8BuilderInvalidDefault 
PASSED

org.apache.kafka.connect.data.SchemaBuilderTest > testMapKeySchemaNull STARTED

org.apache.kafka.connect.data.SchemaBuilderTest > testMapKeySchemaNull PASSED

org.apache.kafka.connect.data.SchemaBuilderTest > 
testFloatBuilderInvalidDefault STARTED

org.apache.kafka.connect.data.SchemaBuilderTest > 
testFloatBuilderInvalidDefault PASSED

org.apache.kafka.connect.data.SchemaBuilderTest > testInt32Builder STARTED

org.apache.kafka.connect.data.SchemaBuilderTest > testInt32Builder PASSED

org.apache.kafka.connect.data.SchemaBuilderTest > testBooleanBuilder STARTED

org.apache.kafka.connect.data.SchemaBuilderTest > testBooleanBuilder PASSED

org.apache.kafka.connect.data.SchemaBuilderTest > testArraySchemaNull STARTED

org.apache.kafka.connect.data.SchemaBuilderTest > testArraySchemaNull PASSED

org.apache.kafka.connect.data.SchemaBuilderTest > testDoubleBuilder STARTED

org.apache.kafka.connect.data.SchemaBuilderTest > testDoubleBuilder PASSED

org.apache.kafka.connect.storage.SimpleHeaderConverterTest > 
shouldConvertEmptyListToListWithoutSchema STARTED

org.apache.kafka.connect.storage.SimpleHeaderConverterTest > 
shouldConvertEmptyListToListWithoutSchema PASSED

org.apache.kafka.connect.storage.SimpleHeaderConverterTest > 
shouldConvertStringWithQuotesAndOtherDelimiterCharacters STARTED

org.apache.kafka.connect.storage.SimpleHeaderConverterTest > 
shouldConvertStringWithQuotesAndOtherDelimiterCharacters PASSED

org.apache.kafka.connect.storage.SimpleHeaderConverterTest > 
shouldConvertListWithMixedValuesToListWithoutSchema STARTED

org.apache.kafka.connect.storage.SimpleHeaderConverterTest > 
shouldConvertListWithMixedValuesToListWithoutSchema PASSED

org.apache.kafka.connect.storage.SimpleHeaderConverterTest > 
shouldConvertMapWithStringKeys STARTED

org.apache.kafka.connect.storage.SimpleHeaderConverterTest > 
shouldConvertMapWithStringKeys PASSED

org.apache.kafka.connect.storage.SimpleHeaderConverterTest > 
shouldParseStringOfMapWithStringValuesWithWhitespaceAsMap STARTED

org.apache.kafka.connect.storage.SimpleHeaderConverterTest > 
shouldParseStringOfMapWithStringValuesWithWhitespaceAsMap PASSED

org.apache.kafka.connect.storage.SimpleHeaderConverterTest > 
shouldConvertMapWithStringKeysAndShortValues STARTED

org.apache.kafka.connect.storage.SimpleHeaderConverterTest > 
shouldConvertMapWithStringKeysAndShortValues PASSED

org.apache.kafka.connect.storage.SimpleHeaderConverterTest > 
shouldParseStringOfMapWithStringValuesWithoutWhitespaceAsMap STARTED

org.apache.kafka.connect.storage.SimpleHeaderConverterTest > 
shouldParseStringOfMapWithStringValuesWithoutWhitespaceAsMap PASSED

org.apache.kafka.connect.storage.SimpleHeaderConverterTest > 
shouldConvertSimpleString STARTED

org.apache.kafka.connect.storage.SimpleHeaderConverterTest > 
shouldConvertSimpleString PASSED

org.apache.kafka.connect.storage.SimpleHeaderConverterTest > 

Build failed in Jenkins: kafka-trunk-jdk8 #3674

2019-05-24 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] MINOR: Updated configuration docs with RocksDBConfigSetter#close 
(#6784)

--
[...truncated 2.48 MB...]

org.apache.kafka.connect.file.FileStreamSourceConnectorTest > 
testInvalidBatchSize PASSED

org.apache.kafka.connect.file.FileStreamSourceConnectorTest > 
testConnectorConfigValidation STARTED

org.apache.kafka.connect.file.FileStreamSourceConnectorTest > 
testConnectorConfigValidation PASSED

org.apache.kafka.connect.file.FileStreamSourceConnectorTest > 
testMultipleSourcesInvalid STARTED

org.apache.kafka.connect.file.FileStreamSourceConnectorTest > 
testMultipleSourcesInvalid PASSED

> Task :streams:streams-scala:test

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionWithNamedRepartitionTopic STARTED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionWithNamedRepartitionTopic PASSED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionJava STARTED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionJava PASSED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegion STARTED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegion PASSED

org.apache.kafka.streams.scala.kstream.SuppressedTest > 
Suppressed.untilWindowCloses should produce the correct suppression STARTED

org.apache.kafka.streams.scala.kstream.SuppressedTest > 
Suppressed.untilWindowCloses should produce the correct suppression PASSED

org.apache.kafka.streams.scala.kstream.SuppressedTest > 
Suppressed.untilTimeLimit should produce the correct suppression STARTED

org.apache.kafka.streams.scala.kstream.SuppressedTest > 
Suppressed.untilTimeLimit should produce the correct suppression PASSED

org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig.maxRecords 
should produce the correct buffer config STARTED

org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig.maxRecords 
should produce the correct buffer config PASSED

org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig.maxBytes 
should produce the correct buffer config STARTED

org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig.maxBytes 
should produce the correct buffer config PASSED

org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig.unbounded 
should produce the correct buffer config STARTED

org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig.unbounded 
should produce the correct buffer config PASSED

org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig should 
support very long chains of factory methods STARTED

org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig should 
support very long chains of factory methods PASSED

org.apache.kafka.streams.scala.kstream.ConsumedTest > Create a Consumed should 
create a Consumed with Serdes STARTED

org.apache.kafka.streams.scala.kstream.ConsumedTest > Create a Consumed should 
create a Consumed with Serdes PASSED

org.apache.kafka.streams.scala.kstream.ConsumedTest > Create a Consumed with 
timestampExtractor and resetPolicy should create a Consumed with Serdes, 
timestampExtractor and resetPolicy STARTED

org.apache.kafka.streams.scala.kstream.ConsumedTest > Create a Consumed with 
timestampExtractor and resetPolicy should create a Consumed with Serdes, 
timestampExtractor and resetPolicy PASSED

org.apache.kafka.streams.scala.kstream.ConsumedTest > Create a Consumed with 
timestampExtractor should create a Consumed with Serdes and timestampExtractor 
STARTED

org.apache.kafka.streams.scala.kstream.ConsumedTest > Create a Consumed with 
timestampExtractor should create a Consumed with Serdes and timestampExtractor 
PASSED

org.apache.kafka.streams.scala.kstream.ConsumedTest > Create a Consumed with 
resetPolicy should create a Consumed with Serdes and resetPolicy STARTED

org.apache.kafka.streams.scala.kstream.ConsumedTest > Create a Consumed with 
resetPolicy should create a Consumed with Serdes and resetPolicy PASSED

org.apache.kafka.streams.scala.kstream.ProducedTest > Create a Produced should 
create a Produced with Serdes STARTED

org.apache.kafka.streams.scala.kstream.ProducedTest > Create a Produced should 
create a Produced with Serdes PASSED

org.apache.kafka.streams.scala.kstream.ProducedTest > Create a Produced with 
timestampExtractor and resetPolicy should create a Consumed with Serdes, 
timestampExtractor and resetPolicy STARTED

org.apache.kafka.streams.scala.kstream.ProducedTest > Create a Produced with 
timestampExtractor and resetPolicy should create a Consumed with 

[jira] [Created] (KAFKA-8428) Cleanup LogValidator#validateMessagesAndAssignOffsetsCompressed to assume single record batch only

2019-05-24 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-8428:


 Summary: Cleanup 
LogValidator#validateMessagesAndAssignOffsetsCompressed to assume single record 
batch only
 Key: KAFKA-8428
 URL: https://issues.apache.org/jira/browse/KAFKA-8428
 Project: Kafka
  Issue Type: Improvement
Reporter: Guozhang Wang
Assignee: Guozhang Wang


Today, the client -> server record batching protocol works like this:

1. With magic v2, we always require a single batch within a compressed set, and 
inside LogValidator#validateMessagesAndAssignOffsetsCompressed we already 
assume so.

2. With magic v1, our code actually also assumes one record batch, since 
whenever inPlaceAssignment is true we assume one batch only; however, with magic 
v1 it is still possible that inPlaceAssignment == false.

3. With magic v0, our code does handle the case with multiple record batch, 
since with v0 inPlaceAssignment is always false.

This makes the logic of LogValidator#validateMessagesAndAssignOffsetsCompressed 
quite twisted and complicated.

Since all standard client implementations we know of so far wrap a single batch 
when compressing (of course, we cannot guarantee this is the case for all 
clients in the wild, but the chance of multiple batches with compressed records 
should really be rare), I think it's better just to make this a universal 
requirement for all versions.
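
To make the proposed invariant concrete, here is a minimal, stdlib-only sketch 
of the single-batch check (names are illustrative only; this is not the actual 
LogValidator code):

```java
import java.util.Iterator;
import java.util.List;

public class SingleBatchCheck {
    // Returns true iff the iterator yields exactly one element -- the
    // invariant the cleanup would enforce for compressed record sets.
    static boolean hasSingleBatch(Iterator<?> batches) {
        if (!batches.hasNext()) {
            return false;          // empty set: invalid
        }
        batches.next();            // consume the one expected batch
        return !batches.hasNext(); // any further batch violates the invariant
    }

    public static void main(String[] args) {
        System.out.println(hasSingleBatch(List.of("batch-0").iterator()));
        System.out.println(hasSingleBatch(List.of("batch-0", "batch-1").iterator()));
    }
}
```

With the invariant enforced for all magic versions, the validator can reject a 
multi-batch compressed set up front instead of threading three version-specific 
cases through the offset-assignment logic.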



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-2.3-jdk8 #13

2019-05-24 Thread Apache Jenkins Server
See 


Changes:

[matthias] MINOR: fix Streams version-probing system test (#6764)

--
[...truncated 2.91 MB...]
kafka.server.ListOffsetsRequestTest > testCurrentEpochValidation PASSED

kafka.server.ListOffsetsRequestTest > testResponseIncludesLeaderEpoch STARTED

kafka.server.ListOffsetsRequestTest > testResponseIncludesLeaderEpoch PASSED

kafka.server.DelayedOperationTest > testRequestPurge STARTED

kafka.server.DelayedOperationTest > testRequestPurge PASSED

kafka.server.DelayedOperationTest > testRequestExpiry STARTED

kafka.server.DelayedOperationTest > testRequestExpiry PASSED

kafka.server.DelayedOperationTest > 
shouldReturnNilOperationsOnCancelForKeyWhenKeyDoesntExist STARTED

kafka.server.DelayedOperationTest > 
shouldReturnNilOperationsOnCancelForKeyWhenKeyDoesntExist PASSED

kafka.server.DelayedOperationTest > testDelayedOperationLockOverride STARTED

kafka.server.DelayedOperationTest > testDelayedOperationLockOverride PASSED

kafka.server.DelayedOperationTest > testTryCompleteLockContention STARTED

kafka.server.DelayedOperationTest > testTryCompleteLockContention PASSED

kafka.server.DelayedOperationTest > testTryCompleteWithMultipleThreads STARTED

kafka.server.DelayedOperationTest > testTryCompleteWithMultipleThreads PASSED

kafka.server.DelayedOperationTest > 
shouldCancelForKeyReturningCancelledOperations STARTED

kafka.server.DelayedOperationTest > 
shouldCancelForKeyReturningCancelledOperations PASSED

kafka.server.DelayedOperationTest > testRequestSatisfaction STARTED

kafka.server.DelayedOperationTest > testRequestSatisfaction PASSED

kafka.server.DelayedOperationTest > testDelayedOperationLock STARTED

kafka.server.DelayedOperationTest > testDelayedOperationLock PASSED

kafka.server.OffsetsForLeaderEpochRequestTest > 
testOffsetsForLeaderEpochErrorCodes STARTED

kafka.server.OffsetsForLeaderEpochRequestTest > 
testOffsetsForLeaderEpochErrorCodes PASSED

kafka.server.OffsetsForLeaderEpochRequestTest > testCurrentEpochValidation 
STARTED

kafka.server.OffsetsForLeaderEpochRequestTest > testCurrentEpochValidation 
PASSED

kafka.network.DynamicConnectionQuotaTest > testDynamicConnectionQuota STARTED

kafka.network.DynamicConnectionQuotaTest > testDynamicConnectionQuota PASSED

kafka.network.DynamicConnectionQuotaTest > testDynamicListenerConnectionQuota 
STARTED

kafka.network.DynamicConnectionQuotaTest > testDynamicListenerConnectionQuota 
PASSED

kafka.network.SocketServerTest > testGracefulClose STARTED

kafka.network.SocketServerTest > testGracefulClose PASSED

kafka.network.SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingAlreadyDone STARTED

kafka.network.SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingAlreadyDone PASSED

kafka.network.SocketServerTest > controlThrowable STARTED

kafka.network.SocketServerTest > controlThrowable PASSED

kafka.network.SocketServerTest > testRequestMetricsAfterStop STARTED

kafka.network.SocketServerTest > testRequestMetricsAfterStop PASSED

kafka.network.SocketServerTest > testConnectionIdReuse STARTED

kafka.network.SocketServerTest > testConnectionIdReuse PASSED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
STARTED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
PASSED

kafka.network.SocketServerTest > testProcessorMetricsTags STARTED

kafka.network.SocketServerTest > testProcessorMetricsTags PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIp STARTED

kafka.network.SocketServerTest > testMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > testConnectionId STARTED

kafka.network.SocketServerTest > testConnectionId PASSED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics STARTED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics PASSED

kafka.network.SocketServerTest > testNoOpAction STARTED

kafka.network.SocketServerTest > testNoOpAction PASSED

kafka.network.SocketServerTest > simpleRequest STARTED

kafka.network.SocketServerTest > simpleRequest PASSED

kafka.network.SocketServerTest > closingChannelException STARTED

kafka.network.SocketServerTest > closingChannelException PASSED

kafka.network.SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingInProgress STARTED

kafka.network.SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingInProgress PASSED

kafka.network.SocketServerTest > testIdleConnection STARTED

kafka.network.SocketServerTest > testIdleConnection PASSED

kafka.network.SocketServerTest > 
testClientDisconnectionWithStagedReceivesFullyProcessed STARTED

kafka.network.SocketServerTest > 
testClientDisconnectionWithStagedReceivesFullyProcessed PASSED

kafka.network.SocketServerTest > testZeroMaxConnectionsPerIp STARTED

kafka.network.SocketServerTest 

Build failed in Jenkins: kafka-trunk-jdk11 #570

2019-05-24 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: fix Streams version-probing system test (#6764)

--
[...truncated 2.48 MB...]
org.apache.kafka.connect.source.SourceRecordTest > 
shouldDuplicateRecordUsingNewHeaders STARTED

org.apache.kafka.connect.source.SourceRecordTest > 
shouldDuplicateRecordUsingNewHeaders PASSED

org.apache.kafka.connect.source.SourceRecordTest > shouldModifyRecordHeader 
STARTED

org.apache.kafka.connect.source.SourceRecordTest > shouldModifyRecordHeader 
PASSED

org.apache.kafka.connect.source.SourceRecordTest > 
shouldCreateSinkRecordWithHeaders STARTED

org.apache.kafka.connect.source.SourceRecordTest > 
shouldCreateSinkRecordWithHeaders PASSED

org.apache.kafka.connect.source.SourceRecordTest > 
shouldCreateSinkRecordWithEmtpyHeaders STARTED

org.apache.kafka.connect.source.SourceRecordTest > 
shouldCreateSinkRecordWithEmtpyHeaders PASSED

org.apache.kafka.connect.sink.SinkRecordTest > 
shouldDuplicateRecordAndCloneHeaders STARTED

org.apache.kafka.connect.sink.SinkRecordTest > 
shouldDuplicateRecordAndCloneHeaders PASSED

org.apache.kafka.connect.sink.SinkRecordTest > 
shouldCreateSinkRecordWithEmptyHeaders STARTED

org.apache.kafka.connect.sink.SinkRecordTest > 
shouldCreateSinkRecordWithEmptyHeaders PASSED

org.apache.kafka.connect.sink.SinkRecordTest > 
shouldDuplicateRecordUsingNewHeaders STARTED

org.apache.kafka.connect.sink.SinkRecordTest > 
shouldDuplicateRecordUsingNewHeaders PASSED

org.apache.kafka.connect.sink.SinkRecordTest > shouldModifyRecordHeader STARTED

org.apache.kafka.connect.sink.SinkRecordTest > shouldModifyRecordHeader PASSED

org.apache.kafka.connect.sink.SinkRecordTest > 
shouldCreateSinkRecordWithHeaders STARTED

org.apache.kafka.connect.sink.SinkRecordTest > 
shouldCreateSinkRecordWithHeaders PASSED

> Task :streams:streams-scala:test

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionWithNamedRepartitionTopic STARTED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionWithNamedRepartitionTopic PASSED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionJava STARTED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionJava PASSED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegion STARTED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegion PASSED

org.apache.kafka.streams.scala.kstream.SuppressedTest > 
Suppressed.untilWindowCloses should produce the correct suppression STARTED

org.apache.kafka.streams.scala.kstream.SuppressedTest > 
Suppressed.untilWindowCloses should produce the correct suppression PASSED

org.apache.kafka.streams.scala.kstream.SuppressedTest > 
Suppressed.untilTimeLimit should produce the correct suppression STARTED

org.apache.kafka.streams.scala.kstream.SuppressedTest > 
Suppressed.untilTimeLimit should produce the correct suppression PASSED

org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig.maxRecords 
should produce the correct buffer config STARTED

org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig.maxRecords 
should produce the correct buffer config PASSED

org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig.maxBytes 
should produce the correct buffer config STARTED

org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig.maxBytes 
should produce the correct buffer config PASSED

org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig.unbounded 
should produce the correct buffer config STARTED

org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig.unbounded 
should produce the correct buffer config PASSED

org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig should 
support very long chains of factory methods STARTED

org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig should 
support very long chains of factory methods PASSED

org.apache.kafka.streams.scala.kstream.ConsumedTest > Create a Consumed should 
create a Consumed with Serdes STARTED

org.apache.kafka.streams.scala.kstream.ConsumedTest > Create a Consumed should 
create a Consumed with Serdes PASSED

org.apache.kafka.streams.scala.kstream.ConsumedTest > Create a Consumed with 
timestampExtractor and resetPolicy should create a Consumed with Serdes, 
timestampExtractor and resetPolicy STARTED

org.apache.kafka.streams.scala.kstream.ConsumedTest > Create a Consumed with 
timestampExtractor and resetPolicy should create a Consumed with Serdes, 
timestampExtractor and resetPolicy PASSED

org.apache.kafka.streams.scala.kstream.ConsumedTest > Create a 

Re: [DISCUSS] KIP-466: Add support for List serialization and deserialization

2019-05-24 Thread Development
Hey,

- did we consider making the return type (i.e., ArrayList vs. LinkedList)
configurable, or encoding it in the serialized bytes?

Not sure about this one. Could you elaborate?

- atm the size of each element is encoded individually; did we consider
an optimization for fixed size elements (like Long) to avoid this overhead?

I cannot think of any clean way to do so. How would you see it?
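
For reference, the kind of fixed-size optimization being asked about could look 
roughly like this (a plain-stdlib sketch with hypothetical names, not the API 
proposed in the KIP): for a fixed-width type such as Long, the per-element size 
prefix can be dropped entirely, since every element is exactly 8 bytes.

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: a List<Long> codec that skips the per-element
// size prefix because every Long is a fixed 8 bytes.
public class FixedSizeListCodec {
    static byte[] serialize(List<Long> data) {
        // 4-byte element count, then 8 bytes per element -- no per-element sizes.
        ByteBuffer buf = ByteBuffer.allocate(4 + 8 * data.size());
        buf.putInt(data.size());
        for (long v : data) {
            buf.putLong(v);
        }
        return buf.array();
    }

    static List<Long> deserialize(byte[] bytes) {
        ByteBuffer buf = ByteBuffer.wrap(bytes);
        int n = buf.getInt();
        List<Long> out = new ArrayList<>(n);
        for (int i = 0; i < n; i++) {
            out.add(buf.getLong());
        }
        return out;
    }

    public static void main(String[] args) {
        List<Long> original = List.of(1L, 2L, 3L);
        // Round-trips without encoding any per-element length.
        System.out.println(deserialize(serialize(original)));
    }
}
```

The trade-off is that the serializer must know (or be configured with) the 
element width, whereas the per-element size prefix works uniformly for 
variable-size types such as String.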

Btw I resolved all your comments under PR

Best,
Daniyar Yeralin

> On May 24, 2019, at 12:01 AM, Matthias J. Sax  wrote:
> 
> Thanks for the KIP. I also had a look into the PR and have two follow up
> question:
> 
> 
> - did we consider making the return type (i.e., ArrayList vs. LinkedList)
> configurable, or encoding it in the serialized bytes?
> 
> - atm the size of each element is encoded individually; did we consider
> an optimization for fixed size elements (like Long) to avoid this overhead?
> 
> 
> 
> -Matthias
> 
> On 5/15/19 6:05 PM, John Roesler wrote:
>> Sounds good!
>> 
>> On Tue, May 14, 2019 at 9:21 AM Development  wrote:
>>> 
>>> Hey,
>>> 
>>> I think the proposal is finalized; no one raised any concerns. Shall we 
>>> call it for a [VOTE]?
>>> 
>>> Best,
>>> Daniyar Yeralin
>>> 
 On May 10, 2019, at 10:17 AM, John Roesler  wrote:
 
 Good observation, Daniyar.
 
 Maybe we should just not implement support for serdeFrom.
 
 We can always add it later, but I think you're right, we need some
 kind of more sophisticated support, or at least a second argument for
 the inner class.
 
 For now, it seems like most use cases would be satisfied without
 serdeFrom(...List...)
 
 -John
 
 On Fri, May 10, 2019 at 8:57 AM Development  wrote:
> 
> Hi,
> 
> I was trying to add some test cases for the list serde, and it led me to 
> this class `org.apache.kafka.common.serialization.SerializationTest`. I 
> saw that it relies on the method 
> `org.apache.kafka.common.serialization.Serdes#serdeFrom(Class<T> type)`
> 
> Now, I’m not sure how to adapt List serde for this method, since it 
> will be a “nested class”. What is the best approach in this case?
> 
> I remember that in Jackson for example, one uses a TypeFactory, and 
> constructs “collectionType” of two classes. For example, 
> `constructCollectionType(List.class, String.class).getClass()`. I don’t 
> think it applies here.
> 
> Any ideas?
> 
> Best,
> Daniyar Yeralin
> 
>> On May 9, 2019, at 2:10 PM, Development  wrote:
>> 
>> Hey Sophie,
>> 
>> Thank you for your input. I think I’d rather finish this KIP as is, and 
>> then open a new one for the Collections (if everyone agrees). I don’t 
>> want to extend the current KIP-466, since most of the work is already 
>> done for it.
>> 
>> Meanwhile, I’ll start adding some test cases for this new list serde 
>> since this discussion seems to be approaching its logical end.
>> 
>> Best,
>> Daniyar Yeralin
>> 
>>> On May 9, 2019, at 1:35 PM, Sophie Blee-Goldman  
>>> wrote:
>>> 
>>> Good point about serdes for other Collections. On the one hand I'd guess
>>> that non-List Collections are probably relatively rare in practice (if
>>> anyone disagrees please correct me!) but on the other hand, a) even if 
>>> just
>>> a small number of people benefit I think it's worth the extra effort 
>>> and b)
>>> if we do end up needing/wanting them in the future it would save us a 
>>> KIP
>>> to just add them now. Personally I feel it would make sense to expand 
>>> the
>>> scope of this KIP a bit to include all Collections as a logical unit, 
>>> but
>>> the ROI could be low..
>>> 
>>> (I know of at least one instance in the unit tests where a Set serde 
>>> could
>>> be useful, and there may be more)
>>> 
>>> On Thu, May 9, 2019 at 7:27 AM Development  wrote:
>>> 
 Hey,
 
 I don’t see any replies. Seems like this proposal can be finalized and
 called for a vote?
 
 Also I’ve been thinking. Do we need more serdes for other Collections?
 Like queue or set for example
 
 Best,
 Daniyar Yeralin
 
> On May 8, 2019, at 2:28 PM, John Roesler  wrote:
> 
> Hi Daniyar,
> 
> No worries about the procedural stuff. Prior experience with KIPs is
> not required :)
> 
> I was just trying to help you propose this stuff in a way that the
> others will find easy to review.
> 
> Thanks for updating the KIP. Thanks to the others for helping out with
> the syntax.
> 
> Given these updates, I'm curious if anyone else has feedback about
> this proposal. Personally, I think it sounds fine!
> 
> -John
> 
> On Wed, May 8, 2019 at 1:01 PM 

[Discussion] KIP-473: Add support for sasl login authentication callback handler to KafkaLog4jAppender

2019-05-24 Thread ryan
KIP-425 adds SASL support to the KafkaLog4jAppender; however, it does not 
include the credential handling improvements added with KIP-86. This is a 
proposal to expose `sasl.login.callback.handler.class` for use by the 
underlying producer.

KIP-473
https://cwiki.apache.org/confluence/display/KAFKA/KIP-473%3A+Enable+KafkaLog4JAppender+to+use+SASL+Authentication+Callback+Handlers

KIP-425
https://cwiki.apache.org/confluence/display/KAFKA/KIP-425%3A+Add+some+Log4J+Kafka+Appender+Properties+for+Producing+to+Secured+Brokers

KIP-86
 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-86%3A+Configurable+SASL+callback+handlers
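
As an illustration, the appender configuration could then look roughly like 
this. The `saslLoginCallbackHandlerClass` property name and the 
`com.example.MyLoginCallbackHandler` class are assumptions for the sketch; the 
other properties follow the style of those added by KIP-425:

```
log4j.appender.kafka=org.apache.kafka.log4jappender.KafkaLog4jAppender
log4j.appender.kafka.brokerList=broker1:9093
log4j.appender.kafka.topic=app-logs
log4j.appender.kafka.securityProtocol=SASL_SSL
log4j.appender.kafka.saslMechanism=OAUTHBEARER
# Hypothetical new property exposing sasl.login.callback.handler.class:
log4j.appender.kafka.saslLoginCallbackHandlerClass=com.example.MyLoginCallbackHandler
```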



Build failed in Jenkins: kafka-trunk-jdk8 #3673

2019-05-24 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: fix Streams version-probing system test (#6764)

--
[...truncated 2.48 MB...]

org.apache.kafka.connect.util.KafkaBasedLogTest > testSendAndReadToEnd STARTED

org.apache.kafka.connect.util.KafkaBasedLogTest > testSendAndReadToEnd PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testConsumerError STARTED

org.apache.kafka.connect.util.KafkaBasedLogTest > testConsumerError PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testProducerError STARTED

org.apache.kafka.connect.util.KafkaBasedLogTest > testProducerError PASSED

org.apache.kafka.connect.converters.FloatConverterTest > testBytesNullToNumber 
STARTED

org.apache.kafka.connect.converters.FloatConverterTest > testBytesNullToNumber 
PASSED

org.apache.kafka.connect.converters.FloatConverterTest > 
testSerializingIncorrectType STARTED

org.apache.kafka.connect.converters.FloatConverterTest > 
testSerializingIncorrectType PASSED

org.apache.kafka.connect.converters.FloatConverterTest > 
testDeserializingHeaderWithTooManyBytes STARTED

org.apache.kafka.connect.converters.FloatConverterTest > 
testDeserializingHeaderWithTooManyBytes PASSED

org.apache.kafka.connect.converters.FloatConverterTest > testNullToBytes STARTED

org.apache.kafka.connect.converters.FloatConverterTest > testNullToBytes PASSED

org.apache.kafka.connect.converters.FloatConverterTest > 
testSerializingIncorrectHeader STARTED

org.apache.kafka.connect.converters.FloatConverterTest > 
testSerializingIncorrectHeader PASSED

org.apache.kafka.connect.converters.FloatConverterTest > 
testDeserializingDataWithTooManyBytes STARTED

org.apache.kafka.connect.converters.FloatConverterTest > 
testDeserializingDataWithTooManyBytes PASSED

org.apache.kafka.connect.converters.FloatConverterTest > 
testConvertingSamplesToAndFromBytes STARTED

org.apache.kafka.connect.converters.FloatConverterTest > 
testConvertingSamplesToAndFromBytes PASSED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testBytesNullToNumber STARTED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testBytesNullToNumber PASSED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testSerializingIncorrectType STARTED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testSerializingIncorrectType PASSED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testDeserializingHeaderWithTooManyBytes STARTED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testDeserializingHeaderWithTooManyBytes PASSED

org.apache.kafka.connect.converters.IntegerConverterTest > testNullToBytes 
STARTED

org.apache.kafka.connect.converters.IntegerConverterTest > testNullToBytes 
PASSED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testSerializingIncorrectHeader STARTED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testSerializingIncorrectHeader PASSED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testDeserializingDataWithTooManyBytes STARTED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testDeserializingDataWithTooManyBytes PASSED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testConvertingSamplesToAndFromBytes STARTED

org.apache.kafka.connect.converters.IntegerConverterTest > 
testConvertingSamplesToAndFromBytes PASSED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectSchemaless STARTED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectSchemaless PASSED

org.apache.kafka.connect.converters.ByteArrayConverterTest > testToConnectNull 
STARTED

org.apache.kafka.connect.converters.ByteArrayConverterTest > testToConnectNull 
PASSED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectBadSchema STARTED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectBadSchema PASSED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectNull STARTED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectNull PASSED

org.apache.kafka.connect.converters.ByteArrayConverterTest > testToConnect 
STARTED

org.apache.kafka.connect.converters.ByteArrayConverterTest > testToConnect 
PASSED

org.apache.kafka.connect.converters.ByteArrayConverterTest > testFromConnect 
STARTED

org.apache.kafka.connect.converters.ByteArrayConverterTest > testFromConnect 
PASSED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectInvalidValue STARTED

org.apache.kafka.connect.converters.ByteArrayConverterTest > 
testFromConnectInvalidValue PASSED

org.apache.kafka.connect.converters.DoubleConverterTest > testBytesNullToNumber 
STARTED

org.apache.kafka.connect.converters.DoubleConverterTest > testBytesNullToNumber 
PASSED

org.apache.kafka.connect.converters.DoubleConverterTest > 
testSerializingIncorrectType STARTED


Re: Contributor Apply

2019-05-24 Thread Matthias J. Sax
Added you to the list of contributors. You can now self-assign tickets.

Please read
https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes
to get started.



-Matthias

On 5/24/19 11:47 AM, gongfuboych...@gmail.com wrote:
> hi, 
> 
> my name is LiMing Zhou, a Java developer from China. I would like to become a 
> contributor to the Kafka project.
> JIRA ID: GongFuBoy
> 
> thanks 
> LiMing Zhou
> 





[DISCUSS] KIP-470: Complete SASL integration for Kafka Log4j Appender (KIP-425)

2019-05-24 Thread ryan
KIP-425 adds SASL support to the Kafka Log4j appender; however, it does not 
include the improvements introduced with KIP-86. This KIP aims to close that 
gap by exposing `sasl.login.callback.handler.class` for use by the appender's 
producer.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-470%3A+Enable+KafkaLog4JAppender+to+use+SASL+Authentication+Callback+Handlers




[jira] [Created] (KAFKA-8427) error while cleanup under windows for EmbeddedKafkaCluster

2019-05-24 Thread Sukumaar Mane (JIRA)
Sukumaar Mane created KAFKA-8427:


 Summary: error while cleanup under windows for EmbeddedKafkaCluster
 Key: KAFKA-8427
 URL: https://issues.apache.org/jira/browse/KAFKA-8427
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.2.0, 2.0.0
Reporter: Sukumaar Mane


Unable to run a simple test case for EmbeddedKafkaCluster.
 Running the simple code below (which is actually a code snippet from the 
*org.apache.kafka.streams.KafkaStreamsTest* class)
{code:java}
public class KTest {
    private static final int NUM_BROKERS = 1;
    // We need this to avoid the KafkaConsumer hanging on poll
    // (this may occur if the test doesn't complete quickly enough)
    @ClassRule
    public static final EmbeddedKafkaCluster CLUSTER = new EmbeddedKafkaCluster(NUM_BROKERS);
    private static final int NUM_THREADS = 2;
    private final StreamsBuilder builder = new StreamsBuilder();
    @Rule
    public TestName testName = new TestName();
    private KafkaStreams globalStreams;
    private Properties props;

    @Before
    public void before() {
        props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "appId");
        props.put(StreamsConfig.CLIENT_ID_CONFIG, "clientId");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, CLUSTER.bootstrapServers());
        props.put(StreamsConfig.METRIC_REPORTER_CLASSES_CONFIG, MockMetricsReporter.class.getName());
        props.put(StreamsConfig.STATE_DIR_CONFIG, TestUtils.tempDirectory().getPath());
        props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, NUM_THREADS);
        globalStreams = new KafkaStreams(builder.build(), props);
    }

    @After
    public void cleanup() {
        if (globalStreams != null) {
            globalStreams.close();
        }
    }

    @Test
    public void thisIsFirstFakeTest() {
        assert true;
    }
}
{code}
But getting these error message at the time of cleanup
{code:java}
java.nio.file.FileSystemException: 
C:\Users\Sukumaar\AppData\Local\Temp\kafka-3445189010908127083\version-2\log.1: 
The process cannot access the file because it is being used by another process.

	at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
	at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
	at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
	at sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
	at sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
	at java.nio.file.Files.delete(Files.java:1126)
	at org.apache.kafka.common.utils.Utils$2.visitFile(Utils.java:753)
	at org.apache.kafka.common.utils.Utils$2.visitFile(Utils.java:742)
	at java.nio.file.Files.walkFileTree(Files.java:2670)
	at java.nio.file.Files.walkFileTree(Files.java:2742)
	at org.apache.kafka.common.utils.Utils.delete(Utils.java:742)
	at kafka.zk.EmbeddedZookeeper.shutdown(EmbeddedZookeeper.scala:65)
	at org.apache.kafka.streams.integration.utils.EmbeddedKafkaCluster.stop(EmbeddedKafkaCluster.java:122)
	at org.apache.kafka.streams.integration.utils.EmbeddedKafkaCluster.after(EmbeddedKafkaCluster.java:151)
	at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:50)
	at org.junit.rules.RunRules.evaluate(RunRules.java:20)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
	at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
	at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
	at com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
	at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
	at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)

{code}
A similar issue was previously reported and marked as resolved, but the error 
still occurs during cleanup.





[jira] [Created] (KAFKA-8426) KIP 421 Bug: ConfigProvider configs param inconsistent with KIP-297

2019-05-24 Thread TEJAL ADSUL (JIRA)
TEJAL ADSUL created KAFKA-8426:
--

 Summary: KIP 421 Bug: ConfigProvider configs param inconsistent 
with KIP-297
 Key: KAFKA-8426
 URL: https://issues.apache.org/jira/browse/KAFKA-8426
 Project: Kafka
  Issue Type: Bug
Reporter: TEJAL ADSUL
 Fix For: 2.3.0


According to KIP-297 a parameter is passed to ConfigProvider with syntax 
"config.providers.\{name}.param.\{param-name}". Currently AbstractConfig allows 
parameters of the format "config.providers.\{name}.\{param-name}". With this 
fix AbstractConfig will be consistent with KIP-297 syntax.
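
A minimal configuration illustrating the two syntaxes (the `path` parameter is 
purely illustrative; FileConfigProvider is used only as a familiar provider 
class):

```
config.providers=file
config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider
# KIP-297 syntax -- config.providers.{name}.param.{param-name}:
config.providers.file.param.path=/etc/secrets
# What AbstractConfig currently accepts, inconsistently (no ".param." segment):
# config.providers.file.path=/etc/secrets
```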





[jira] [Created] (KAFKA-8425) KIP 421 Bug: Modifying Immutable Originals Map results in Java exception

2019-05-24 Thread TEJAL ADSUL (JIRA)
TEJAL ADSUL created KAFKA-8425:
--

 Summary: KIP 421 Bug: Modifying Immutable Originals Map results in 
Java exception 
 Key: KAFKA-8425
 URL: https://issues.apache.org/jira/browse/KAFKA-8425
 Project: Kafka
  Issue Type: Bug
  Components: config
Reporter: TEJAL ADSUL
 Fix For: 2.3.0


The originals map passed to the AbstractConfig class can be immutable, but the 
resolveConfigVariable function was modifying it. If the map is immutable, this 
results in an exception.
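
A minimal Java demonstration of the failure mode and the defensive-copy fix 
(the key and placeholder values are illustrative; this is not the actual 
AbstractConfig code):

```java
import java.util.HashMap;
import java.util.Map;

// Writing through an immutable map throws; resolving into a copy does not.
public class ImmutableOriginalsDemo {

    // The fix: resolve into a mutable copy, leaving the caller's map untouched.
    static Map<String, String> resolveSafely(Map<String, String> originals) {
        Map<String, String> resolved = new HashMap<>(originals);
        resolved.put("key", "resolved");
        return resolved;
    }

    public static void main(String[] args) {
        // Map.of returns an immutable map, like an immutable originals map.
        Map<String, String> originals = Map.of("key", "${provider:path:key}");
        try {
            originals.put("key", "resolved"); // mutating the caller's map
        } catch (UnsupportedOperationException e) {
            System.out.println("caught: " + e.getClass().getSimpleName());
        }
        System.out.println(resolveSafely(originals).get("key"));
    }
}
```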





[Discussion] KIP-473: Add sasl login callback handler support to KafkaLog4jAppender

2019-05-24 Thread Ryan P
Apologies in advance if this thread makes it across your desk more than
once. I tried the "Start a new discussion" feature in the UI but it did not
appear to work. As such I went ahead and started this thread the old
fashioned way.


KIP-425 adds SASL support to the KafkaLog4jAppender; however, it does not
include the credential handling improvements added with KIP-86. This is a
proposal to expose `sasl.login.callback.handler.class` for use by the
underlying producer.

KIP-473
https://cwiki.apache.org/confluence/display/KAFKA/KIP-473%3A+Enable+KafkaLog4JAppender+to+use+SASL+Authentication+Callback+Handlers

KIP-425
https://cwiki.apache.org/confluence/display/KAFKA/KIP-425%3A+Add+some+Log4J+Kafka+Appender+Properties+for+Producing+to+Secured+Brokers

KIP-86

https://cwiki.apache.org/confluence/display/KAFKA/KIP-86%3A+Configurable+SASL+callback+handlers


Build failed in Jenkins: kafka-trunk-jdk11 #569

2019-05-24 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Fix a few compiler warnings (#6767)

--
[...truncated 2.49 MB...]
org.apache.kafka.connect.json.JsonConverterTest > 
testJsonSchemaMetadataTranslation STARTED

org.apache.kafka.connect.json.JsonConverterTest > 
testJsonSchemaMetadataTranslation PASSED

org.apache.kafka.connect.json.JsonConverterTest > bytesToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > bytesToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > shortToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > shortToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > intToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > intToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndNullValueToJson 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndNullValueToJson 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > timestampToConnectOptional 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > timestampToConnectOptional 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > structToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > structToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > stringToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > stringToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndArrayToJson 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndArrayToJson 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > byteToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > byteToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaPrimitiveToConnect 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaPrimitiveToConnect 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > byteToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > byteToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > intToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > intToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > dateToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > dateToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > noSchemaToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > noSchemaToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > noSchemaToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > noSchemaToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndPrimitiveToJson 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullSchemaAndPrimitiveToJson 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
decimalToConnectOptionalWithDefaultValue STARTED

org.apache.kafka.connect.json.JsonConverterTest > 
decimalToConnectOptionalWithDefaultValue PASSED

org.apache.kafka.connect.json.JsonConverterTest > mapToJsonStringKeys STARTED

org.apache.kafka.connect.json.JsonConverterTest > mapToJsonStringKeys PASSED

org.apache.kafka.connect.json.JsonConverterTest > arrayToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > arrayToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > nullToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > nullToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > timeToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > timeToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > structToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > structToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > 
testConnectSchemaMetadataTranslation STARTED

org.apache.kafka.connect.json.JsonConverterTest > 
testConnectSchemaMetadataTranslation PASSED

org.apache.kafka.connect.json.JsonConverterTest > shortToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > shortToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > dateToConnect STARTED

org.apache.kafka.connect.json.JsonConverterTest > dateToConnect PASSED

org.apache.kafka.connect.json.JsonConverterTest > doubleToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > doubleToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > testStringHeaderToJson STARTED

org.apache.kafka.connect.json.JsonConverterTest > testStringHeaderToJson PASSED

org.apache.kafka.connect.json.JsonConverterTest > decimalToConnectOptional 
STARTED

org.apache.kafka.connect.json.JsonConverterTest > decimalToConnectOptional 
PASSED

org.apache.kafka.connect.json.JsonConverterTest > structSchemaIdentical STARTED

org.apache.kafka.connect.json.JsonConverterTest > structSchemaIdentical PASSED

org.apache.kafka.connect.json.JsonConverterTest > timeToConnectWithDefaultValue 
STARTED


Build failed in Jenkins: kafka-trunk-jdk8 #3672

2019-05-24 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: add Kafka Streams upgrade notes for 2.3 release (#6758)

[jason] MINOR: Fix a few compiler warnings (#6767)

--
[...truncated 2.53 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED


Contributor Apply

2019-05-24 Thread gongfuboych...@gmail.com
Hi,

My name is LiMing Zhou, from China; I am a Java developer and would like to
become a contributor to the Kafka project.
JIRA ID: GongFuBoy

Thanks,
LiMing Zhou


Build failed in Jenkins: kafka-trunk-jdk11 #568

2019-05-24 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: add Kafka Streams upgrade notes for 2.3 release (#6758)

--
[...truncated 2.48 MB...]
org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldListenForRestoreEvents STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldListenForRestoreEvents PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldThrowProcessorStateStoreExceptionIfStoreCloseFailed STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldThrowProcessorStateStoreExceptionIfStoreCloseFailed PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldRetryWhenPartitionsForThrowsTimeoutException STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldRetryWhenPartitionsForThrowsTimeoutException PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldRestoreRecordsUpToHighwatermark STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldRestoreRecordsUpToHighwatermark PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldDeleteAndRecreateStoreDirectoryOnReinitialize STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldDeleteAndRecreateStoreDirectoryOnReinitialize PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldNotDeleteCheckpointFileAfterLoaded STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldNotDeleteCheckpointFileAfterLoaded PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldInitializeStateStores STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldInitializeStateStores PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldSkipNullKeysWhenRestoring STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldSkipNullKeysWhenRestoring PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldReturnInitializedStoreNames STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldReturnInitializedStoreNames PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldCheckpointRestoredOffsetsToFile STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldCheckpointRestoredOffsetsToFile PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldUnlockGlobalStateDirectoryOnClose STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldUnlockGlobalStateDirectoryOnClose PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldCheckpointOffsets STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldCheckpointOffsets PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldAttemptToCloseAllStoresEvenWhenSomeException STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldAttemptToCloseAllStoresEvenWhenSomeException PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldThrowIllegalArgumentExceptionIfCallbackIsNull STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldThrowIllegalArgumentExceptionIfCallbackIsNull PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldSkipGlobalInMemoryStoreOffsetsToFile STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldSkipGlobalInMemoryStoreOffsetsToFile PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldReadCheckpointOffsets STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldReadCheckpointOffsets PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldCloseStateStores STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldCloseStateStores PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldThrowIllegalArgumentIfTryingToRegisterStoreThatIsNotGlobal STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldThrowIllegalArgumentIfTryingToRegisterStoreThatIsNotGlobal PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldFlushStateStores STARTED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 
shouldFlushStateStores PASSED

org.apache.kafka.streams.processor.internals.GlobalStateManagerImplTest > 

Build failed in Jenkins: kafka-2.3-jdk8 #12

2019-05-24 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: Update jackson to 2.9.9 (#6798)

--
[...truncated 2.79 MB...]

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesExtendedAclChangeEventWhenInterBrokerProtocolAtLeastKafkaV2 PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralAclChangeEventWhenInterBrokerProtocolIsKafkaV2 STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralAclChangeEventWhenInterBrokerProtocolIsKafkaV2 PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnPrefixedResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnPrefixedResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSingleCharacterResourceAcls 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSingleCharacterResourceAcls 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testEmptyAclThrowsException 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testEmptyAclThrowsException PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testSuperUserWithCustomPrincipalHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testSuperUserWithCustomPrincipalHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAllowAccessWithCustomPrincipal STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAllowAccessWithCustomPrincipal PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnWildcardResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDeleteAclOnWildcardResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testChangeListenerTiming STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testChangeListenerTiming PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralWritesLiteralAclChangeEventWhenInterBrokerProtocolLessThanKafkaV2eralAclChangesForOlderProtocolVersions
 STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesLiteralWritesLiteralAclChangeEventWhenInterBrokerProtocolLessThanKafkaV2eralAclChangesForOlderProtocolVersions
 PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testThrowsOnAddPrefixedAclIfInterBrokerProtocolVersionTooLow STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testThrowsOnAddPrefixedAclIfInterBrokerProtocolVersionTooLow PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAccessAllowedIfAllowAclExistsOnPrefixedResource STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAccessAllowedIfAllowAclExistsOnPrefixedResource PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAuthorizeWithEmptyResourceName STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAuthorizeWithEmptyResourceName PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDeleteAllAclOnPrefixedResource STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDeleteAllAclOnPrefixedResource PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnLiteralResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnLiteralResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAuthorizeThrowsOnNoneLiteralResource STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testAuthorizeThrowsOnNoneLiteralResource PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testGetAclsPrincipal STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testGetAclsPrincipal PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnPrefiexedResource 
STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAddAclsOnPrefiexedResource 
PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesExtendedAclChangeEventIfInterBrokerProtocolNotSet STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testWritesExtendedAclChangeEventIfInterBrokerProtocolNotSet PASSED