[GitHub] kafka pull request #4115: KAFKA-6105: load client properties in proper order...

2017-10-23 Thread cnZach
GitHub user cnZach opened a pull request:

https://github.com/apache/kafka/pull/4115

KAFKA-6105: load client properties in proper order for 
kafka.tools.EndToEndLatency

Currently, the property file is loaded first, and later a auto generated 
group.id is used:
consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "test-group-" + 
System.currentTimeMillis())
so even user gives the group.id in a property file, it is not picked up.

we need to load client properties in proper order, so that we allow user to 
specify group.id and other properties, excludes only the properties provided in 
the argument list.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cnZach/kafka cnZach_KAFKA-6105

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4115.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4115


commit d83f84e14c556fffddde2a74469e5be43fc99b10
Author: Yuexin Zhang 
Date:   2017-10-23T07:13:10Z

load client properties in proper order, so that we allow user to sepcify 
group.id and other propeties, excludes only the properties provided in the 
argument list




---


[GitHub] kafka pull request #3512: KAFKA-5574: add thread.id header in show-detailed-...

2017-10-23 Thread cnZach
Github user cnZach closed the pull request at:

https://github.com/apache/kafka/pull/3512


---


Re: [DISCUSS] KIP-204 : adding records deletion operation to the new Admin Client API

2017-10-23 Thread Paolo Patierno
Hi Colin,

I was using the long primitive in the code but not updated the KIP yet, sorry 
... now it's updated !

At same time I agree on using DeletionTarget ... KIP updated !


Regarding the deleteBefore factory method, it's a pattern already used witn 
NewPartitions.increaseTo which I think it's really clear and give us more 
possibility to evolve this DeletionTarget class if we'll add different ways to 
specify such target not only offset based.


Thanks,


Paolo Patierno
Senior Software Engineer (IoT) @ Red Hat
Microsoft MVP on Azure & IoT
Microsoft Azure Advisor

Twitter : @ppatierno
Linkedin : paolopatierno
Blog : DevExperience



From: Colin McCabe 
Sent: Friday, October 20, 2017 8:18 PM
To: dev@kafka.apache.org
Subject: Re: [DISCUSS] KIP-204 : adding records deletion operation to the new 
Admin Client API

> /** Describe records to delete */
 > public class DeleteRecords {
 > private Long offset;

"DeleteRecords" doesn't really explain what the class is, though.  How
about "DeletionTarget"?  Also, why do we need a Long object rather than
a long primitive?

 >
 > /**
 > * Delete all the records before the given {@code offset}
 > */
 > public static DeleteRecords deleteBefore(Long offset) { ... }

This seems confusing to me.  What's wrong with a regular constructor for
DeletionTarget?

best,
Colin


On Fri, Oct 20, 2017, at 01:28, Paolo Patierno wrote:
> Hi all,
>
>
> I have just updated the KIP with your suggestion.
>
> I'm going to continue implementation and tests with these changes,
> waiting for further discussion.
>
>
> Thanks,
>
>
> Paolo Patierno
> Senior Software Engineer (IoT) @ Red Hat
> Microsoft MVP on Azure & IoT
> Microsoft Azure Advisor
>
> Twitter : @ppatierno
> Linkedin : paolopatierno
> Blog : DevExperience
>
>
> 
> From: Paolo Patierno 
> Sent: Thursday, October 19, 2017 1:37 PM
> To: dev@kafka.apache.org
> Subject: Re: [DISCUSS] KIP-204 : adding records deletion operation to the
> new Admin Client API
>
> @Colin Yes you are right I'll update the KIP-204 mentioning the related
> ACL permission DELETE on TOPIC
>
>
> @Dong @Ismael
>
>
> Considering future improvements on this, it makes sense to me using a
> class instead of a Long.
>
> Maybe the name could be just "DeleteRecords" (as "NewPartitions") having
> a deleteBefore(Long) factory method for a simple creation when you need
> to delete before the specified offset.
>
>
> Thanks,
>
>
> Paolo Patierno
> Senior Software Engineer (IoT) @ Red Hat
> Microsoft MVP on Azure & IoT
> Microsoft Azure Advisor
>
> Twitter : @ppatierno
> Linkedin : paolopatierno
> Blog : DevExperience
>
>
> 
> From: Colin McCabe 
> Sent: Wednesday, October 18, 2017 3:58 PM
> To: dev@kafka.apache.org
> Subject: Re: [DISCUSS] KIP-204 : adding records deletion operation to the
> new Admin Client API
>
> Having a class there makes sense to me.  It also helps clarify what the
> Long represents (a record offset).
>
> regards,
> Colin
>
>
> On Wed, Oct 18, 2017, at 06:19, Dong Lin wrote:
> > Sure. This makes sense. I agree it is better to replace Long with a new
> > class.
> >
> > On Wed, Oct 18, 2017 at 6:16 AM, Ismael Juma  wrote:
> >
> > > Hi Dong,
> > >
> > > Yes, I mean replacing the `Long` with a class in the map. The class would
> > > have static factory methods for the various use cases. To use the
> > > `createPartitions` example, there is a `NewPartitions.increaseTo` method.
> > >
> > > Not sure why you think it's too complicated. It provides better type
> > > safety, it's more informative and makes it easier to evolve. Thankfully
> > > Java has lambdas for a while now and mapping a collection from one type to
> > > another is reasonably simple.
> > >
> > > Your suggestion doesn't work because both methods would have the same
> > > "erased" signature. You can't have two overloaded methods that have the
> > > same signature apart from generic parameters. Also, we'd end up with 2
> > > methods in AdminClient.
> > >
> > > Ismael
> > >
> > >
> > > On Wed, Oct 18, 2017 at 1:42 PM, Dong Lin  wrote:
> > >
> > > > Hey Ismael,
> > > >
> > > > To clarify, I think you are saying that we should replace
> > > > "deleteRecords(Map partitionsAndOffsets)" with
> > > > "deleteRecords(Map
> > > > partitionsAndOffsets)", where DeleteRecordsParameter should be include a
> > > > "Long value", and probably "Boolean isBeforeOrAfter" and "Boolean
> > > > isOffsetOrTime" in the future.
> > > >
> > > > I get the point that we want to only include additional parameter
> > > > in DeleteRecordsOptions. I just feel it is a bit overkill to have a new
> 

[GitHub] kafka pull request #4115: KAFKA-6105: load client properties in proper order...

2017-10-23 Thread cnZach
Github user cnZach closed the pull request at:

https://github.com/apache/kafka/pull/4115


---


Re: [DISCUSS] KIP-204 : adding records deletion operation to the new Admin Client API

2017-10-23 Thread Paolo Patierno
About the name I just started to have a doubt about DeletetionTarget because it 
could be bounded to any deletion operation (i.e. delete topic, ...) and not 
just what we want now, so records deletion.

I have updated the KIP-204 using DeleteRecordsTarget so it's clear that it's 
related to the delete records operation and what it means, so the target for 
such operation.


Paolo Patierno
Senior Software Engineer (IoT) @ Red Hat
Microsoft MVP on Azure & IoT
Microsoft Azure Advisor

Twitter : @ppatierno
Linkedin : paolopatierno
Blog : DevExperience



From: Paolo Patierno 
Sent: Monday, October 23, 2017 7:38 AM
To: dev@kafka.apache.org
Subject: Re: [DISCUSS] KIP-204 : adding records deletion operation to the new 
Admin Client API

Hi Colin,

I was using the long primitive in the code but not updated the KIP yet, sorry 
... now it's updated !

At same time I agree on using DeletionTarget ... KIP updated !


Regarding the deleteBefore factory method, it's a pattern already used witn 
NewPartitions.increaseTo which I think it's really clear and give us more 
possibility to evolve this DeletionTarget class if we'll add different ways to 
specify such target not only offset based.


Thanks,


Paolo Patierno
Senior Software Engineer (IoT) @ Red Hat
Microsoft MVP on Azure & IoT
Microsoft Azure Advisor

Twitter : @ppatierno
Linkedin : paolopatierno
Blog : DevExperience



From: Colin McCabe 
Sent: Friday, October 20, 2017 8:18 PM
To: dev@kafka.apache.org
Subject: Re: [DISCUSS] KIP-204 : adding records deletion operation to the new 
Admin Client API

> /** Describe records to delete */
 > public class DeleteRecords {
 > private Long offset;

"DeleteRecords" doesn't really explain what the class is, though.  How
about "DeletionTarget"?  Also, why do we need a Long object rather than
a long primitive?

 >
 > /**
 > * Delete all the records before the given {@code offset}
 > */
 > public static DeleteRecords deleteBefore(Long offset) { ... }

This seems confusing to me.  What's wrong with a regular constructor for
DeletionTarget?

best,
Colin


On Fri, Oct 20, 2017, at 01:28, Paolo Patierno wrote:
> Hi all,
>
>
> I have just updated the KIP with your suggestion.
>
> I'm going to continue implementation and tests with these changes,
> waiting for further discussion.
>
>
> Thanks,
>
>
> Paolo Patierno
> Senior Software Engineer (IoT) @ Red Hat
> Microsoft MVP on Azure & IoT
> Microsoft Azure Advisor
>
> Twitter : @ppatierno
> Linkedin : paolopatierno
> Blog : DevExperience
>
>
> 
> From: Paolo Patierno 
> Sent: Thursday, October 19, 2017 1:37 PM
> To: dev@kafka.apache.org
> Subject: Re: [DISCUSS] KIP-204 : adding records deletion operation to the
> new Admin Client API
>
> @Colin Yes you are right I'll update the KIP-204 mentioning the related
> ACL permission DELETE on TOPIC
>
>
> @Dong @Ismael
>
>
> Considering future improvements on this, it makes sense to me using a
> class instead of a Long.
>
> Maybe the name could be just "DeleteRecords" (as "NewPartitions") having
> a deleteBefore(Long) factory method for a simple creation when you need
> to delete before the specified offset.
>
>
> Thanks,
>
>
> Paolo Patierno
> Senior Software Engineer (IoT) @ Red Hat
> Microsoft MVP on Azure & IoT
> Microsoft Azure Advisor
>
> Twitter : @ppatierno
> Linkedin : paolopatierno
> Blog : DevExperience
>
>
> 
> From: Colin McCabe 
> Sent: Wednesday, October 18, 2017 3:58 PM
> To: dev@kafka.apache.org
> Subject: Re: [DISCUSS] KIP-204 : adding records deletion operation to the
> new Admin Client API
>
> Having a class there makes sense to me.  It also helps clarify what the
> Long represents (a record offset).
>
> regards,
> Colin
>
>
> On Wed, Oct 18, 2017, at 06:19, Dong Lin wrote:
> > Sure. This makes sense. I agree it is better to replace Long with a new
> > class.
> >
> > On Wed, Oct 18, 2017 at 6:16 AM, Ismael Juma  wrote:
> >
> > > Hi Dong,
> > >
> > > Yes, I mean replacing the `Long` with a class in the map. The class would
> > > have static factory methods for the various use cases. To use the
> > > `createPartitions` example, there is a `NewPartitions.increaseTo` method.
> > >
> > > Not sure why you think it's too complicated. It provides better type
> > > safety, it's more informative and makes it easier to evolve. Thankfully
> > > Java has lambdas for a while now and mapping a collection from one type to
> > > another is reasonably simple.
> > >
> > > Yo

[GitHub] kafka pull request #4116: KAFKA-6105: load client properties in proper order...

2017-10-23 Thread cnZach
GitHub user cnZach opened a pull request:

https://github.com/apache/kafka/pull/4116

KAFKA-6105: load client properties in proper order for EndToEndLatency tool

Currently, the property file is loaded first, and later a auto generated 
group.id is used:
`consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "test-group-" + 
System.currentTimeMillis())`

so even user gives the group.id in a property file, it is not picked up.

Change it to load client properties in proper order: set default values 
first, then try to load the custom values set in client.properties file.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cnZach/kafka cnZach_KAFKA-6105

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4116.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4116


commit 99b6ce136d1c4bafa0f74828583d6b4af6cb0785
Author: Yuexin Zhang 
Date:   2017-10-23T08:11:45Z

load client properties in proper order: set default values first, then try 
to load the custom values set in client.properties file




---


Re: [DISCUSS] KIP-204 : adding records deletion operation to the new Admin Client API

2017-10-23 Thread Tom Bentley
At the risk of muddying the waters further, have you considered
"RecordsToDelete" as the name of the class? It's both shorter and more
descriptive imho.

Also "deleteBefore()" as the factory method name isn't very future proof if
we came to support time-based deletion. Something like "beforeOffset()"
would be clearer, imho.

Putting these together: RecordsToDelete.beforeOffset() seems much clearer
to me than DeleteRecordsTarget.deleteBefore()


On 23 October 2017 at 08:45, Paolo Patierno  wrote:

> About the name I just started to have a doubt about DeletetionTarget
> because it could be bounded to any deletion operation (i.e. delete topic,
> ...) and not just what we want now, so records deletion.
>
> I have updated the KIP-204 using DeleteRecordsTarget so it's clear that
> it's related to the delete records operation and what it means, so the
> target for such operation.
>
>
> Paolo Patierno
> Senior Software Engineer (IoT) @ Red Hat
> Microsoft MVP on Azure & IoT
> Microsoft Azure Advisor
>
> Twitter : @ppatierno
> Linkedin : paolopatierno
> Blog : DevExperience
>
>
> 
> From: Paolo Patierno 
> Sent: Monday, October 23, 2017 7:38 AM
> To: dev@kafka.apache.org
> Subject: Re: [DISCUSS] KIP-204 : adding records deletion operation to the
> new Admin Client API
>
> Hi Colin,
>
> I was using the long primitive in the code but not updated the KIP yet,
> sorry ... now it's updated !
>
> At same time I agree on using DeletionTarget ... KIP updated !
>
>
> Regarding the deleteBefore factory method, it's a pattern already used
> witn NewPartitions.increaseTo which I think it's really clear and give us
> more possibility to evolve this DeletionTarget class if we'll add different
> ways to specify such target not only offset based.
>
>
> Thanks,
>
>
> Paolo Patierno
> Senior Software Engineer (IoT) @ Red Hat
> Microsoft MVP on Azure & IoT
> Microsoft Azure Advisor
>
> Twitter : @ppatierno
> Linkedin : paolopatierno
> Blog : DevExperience
>
>
> 
> From: Colin McCabe 
> Sent: Friday, October 20, 2017 8:18 PM
> To: dev@kafka.apache.org
> Subject: Re: [DISCUSS] KIP-204 : adding records deletion operation to the
> new Admin Client API
>
> > /** Describe records to delete */
>  > public class DeleteRecords {
>  > private Long offset;
>
> "DeleteRecords" doesn't really explain what the class is, though.  How
> about "DeletionTarget"?  Also, why do we need a Long object rather than
> a long primitive?
>
>  >
>  > /**
>  > * Delete all the records before the given {@code offset}
>  > */
>  > public static DeleteRecords deleteBefore(Long offset) { ... }
>
> This seems confusing to me.  What's wrong with a regular constructor for
> DeletionTarget?
>
> best,
> Colin
>
>
> On Fri, Oct 20, 2017, at 01:28, Paolo Patierno wrote:
> > Hi all,
> >
> >
> > I have just updated the KIP with your suggestion.
> >
> > I'm going to continue implementation and tests with these changes,
> > waiting for further discussion.
> >
> >
> > Thanks,
> >
> >
> > Paolo Patierno
> > Senior Software Engineer (IoT) @ Red Hat
> > Microsoft MVP on Azure & IoT
> > Microsoft Azure Advisor
> >
> > Twitter : @ppatierno
> > Linkedin : paolopatierno
> > Blog : DevExperience
> >
> >
> > 
> > From: Paolo Patierno 
> > Sent: Thursday, October 19, 2017 1:37 PM
> > To: dev@kafka.apache.org
> > Subject: Re: [DISCUSS] KIP-204 : adding records deletion operation to the
> > new Admin Client API
> >
> > @Colin Yes you are right I'll update the KIP-204 mentioning the related
> > ACL permission DELETE on TOPIC
> >
> >
> > @Dong @Ismael
> >
> >
> > Considering future improvements on this, it makes sense to me using a
> > class instead of a Long.
> >
> > Maybe the name could be just "DeleteRecords" (as "NewPartitions") having
> > a deleteBefore(Long) factory method for a simple creation when you need
> > to delete before the specified offset.
> >
> >
> > Thanks,
> >
> >
> > Paolo Patierno
> > Senior Software Engineer (IoT) @ Red Hat
> > Microsoft MVP on Azure & IoT
> > Microsoft Azure Advisor
> >
> > Twitter : @ppatierno
> > Linkedin : paolopatierno
> > Blog : DevExperience
> >
> >
> > 
> > From: Colin McCabe 
> > Sent: Wednesday, October 18, 2017 3:58 PM
> > To: dev@kafka.apache.org
> > Subject: Re: [DISCUSS] KIP-204 : adding records deletion operation to the
> > new Admin Client API
> >
> > Having a class there makes sense to me.  It also helps clarify what the
> > Long represents (a record offset).
> >
> >

Re: Please subscribe me in your updates

2017-10-23 Thread Tom Bentley
As detailed at https://kafka.apache.org/contact, to subscribe, send an
email to dev-subscr...@kafka.apache.org < dev-subscr...@kafka.apache.org>.



On 22 October 2017 at 15:15, Veeramani S  wrote:

>
>


Metadata class doesn't "expose" topics with errors

2017-10-23 Thread Paolo Patierno
Hi devs,

while developing the KIP-204 (having delete records operation in the "new" 
Admin Client) I'm facing with the following doubt (or maybe a lack of info) ...


As described by KIP-107 (which implements this feature at protocol level and in 
the "legacy" Admin Client), the request needs to be sent to the leader.


For both KIPs, the operation has a Map (offset is a 
long in the "legacy" API but it's becoming to be a class in the "new" API) and 
in order to reduce the number of requests to different leaders, my code groups 
partitions having same leader so having a Map>.


In order to know the leaders I need to request metadata and there are two ways 
for doing that :


  *   using something like the producer does with Metadata class, putting the 
topics, request update and waiting for it
  *   using the low level MetadataRequest and handling the related response 
(which is what the "legacy" API does today)

I noticed that building the Cluster object from the MetadataResponse, the 
topics with errors are skipped and it means that in the final "high level" 
Metadata class (fetching the Cluster object) there is no information about 
them. So with first solution we have no info about topics with errors (maybe 
the only errors I'm able to handle is the "LEADER_NOT_AVAILABLE", if 
leaderFor() on the Cluster returns a null Node).

Is there any specific reason why "topics with errors" are not exposed in the 
Metadata instance ?
Is the preferred pattern using the low level protocol stuff in such case ?

Thanks


Paolo Patierno
Senior Software Engineer (IoT) @ Red Hat
Microsoft MVP on Azure & IoT
Microsoft Azure Advisor

Twitter : @ppatierno
Linkedin : paolopatierno
Blog : DevExperience


Re: Metadata class doesn't "expose" topics with errors

2017-10-23 Thread Paolo Patierno
I'd like just to add that in my specific case I could resolve creating a new 
NodeProvider implementation like LeaderForProvider in order to send the request 
to the right leader for each topic.

The drawback of this solution is not grouping topics for the same leader so not 
having only one requests but different of them (even if for the same leader).


Paolo Patierno
Senior Software Engineer (IoT) @ Red Hat
Microsoft MVP on Azure & IoT
Microsoft Azure Advisor

Twitter : @ppatierno
Linkedin : paolopatierno
Blog : DevExperience



From: Paolo Patierno 
Sent: Monday, October 23, 2017 9:06 AM
To: dev@kafka.apache.org
Subject: Metadata class doesn't "expose" topics with errors

Hi devs,

while developing the KIP-204 (having delete records operation in the "new" 
Admin Client) I'm facing with the following doubt (or maybe a lack of info) ...


As described by KIP-107 (which implements this feature at protocol level and in 
the "legacy" Admin Client), the request needs to be sent to the leader.


For both KIPs, the operation has a Map (offset is a 
long in the "legacy" API but it's becoming to be a class in the "new" API) and 
in order to reduce the number of requests to different leaders, my code groups 
partitions having same leader so having a Map>.


In order to know the leaders I need to request metadata and there are two ways 
for doing that :


  *   using something like the producer does with Metadata class, putting the 
topics, request update and waiting for it
  *   using the low level MetadataRequest and handling the related response 
(which is what the "legacy" API does today)

I noticed that building the Cluster object from the MetadataResponse, the 
topics with errors are skipped and it means that in the final "high level" 
Metadata class (fetching the Cluster object) there is no information about 
them. So with first solution we have no info about topics with errors (maybe 
the only errors I'm able to handle is the "LEADER_NOT_AVAILABLE", if 
leaderFor() on the Cluster returns a null Node).

Is there any specific reason why "topics with errors" are not exposed in the 
Metadata instance ?
Is the preferred pattern using the low level protocol stuff in such case ?

Thanks


Paolo Patierno
Senior Software Engineer (IoT) @ Red Hat
Microsoft MVP on Azure & IoT
Microsoft Azure Advisor

Twitter : @ppatierno
Linkedin : paolopatierno
Blog : DevExperience


[GitHub] kafka pull request #4104: MINOR: add hint for setting an uncaught exception ...

2017-10-23 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4104


---


Re: Metadata class doesn't "expose" topics with errors

2017-10-23 Thread Paolo Patierno
Finally another plan could be to use nesting of runnable calls.

The first one for asking metadata (using the MetadataRequest which provides us 
all the errors) and then sending the delete records requests in the 
handleResponse() of such metadata request.


Paolo Patierno
Senior Software Engineer (IoT) @ Red Hat
Microsoft MVP on Azure & IoT
Microsoft Azure Advisor

Twitter : @ppatierno
Linkedin : paolopatierno
Blog : DevExperience



From: Paolo Patierno 
Sent: Monday, October 23, 2017 9:06 AM
To: dev@kafka.apache.org
Subject: Metadata class doesn't "expose" topics with errors

Hi devs,

while developing the KIP-204 (having delete records operation in the "new" 
Admin Client) I'm facing with the following doubt (or maybe a lack of info) ...


As described by KIP-107 (which implements this feature at protocol level and in 
the "legacy" Admin Client), the request needs to be sent to the leader.


For both KIPs, the operation has a Map (offset is a 
long in the "legacy" API but it's becoming to be a class in the "new" API) and 
in order to reduce the number of requests to different leaders, my code groups 
partitions having same leader so having a Map>.


In order to know the leaders I need to request metadata and there are two ways 
for doing that :


  *   using something like the producer does with Metadata class, putting the 
topics, request update and waiting for it
  *   using the low level MetadataRequest and handling the related response 
(which is what the "legacy" API does today)

I noticed that building the Cluster object from the MetadataResponse, the 
topics with errors are skipped and it means that in the final "high level" 
Metadata class (fetching the Cluster object) there is no information about 
them. So with first solution we have no info about topics with errors (maybe 
the only errors I'm able to handle is the "LEADER_NOT_AVAILABLE", if 
leaderFor() on the Cluster returns a null Node).

Is there any specific reason why "topics with errors" are not exposed in the 
Metadata instance ?
Is the preferred pattern using the low level protocol stuff in such case ?

Thanks


Paolo Patierno
Senior Software Engineer (IoT) @ Red Hat
Microsoft MVP on Azure & IoT
Microsoft Azure Advisor

Twitter : @ppatierno
Linkedin : paolopatierno
Blog : DevExperience


Jenkins build is back to normal : kafka-trunk-jdk9 #144

2017-10-23 Thread Apache Jenkins Server
See 




[GitHub] kafka pull request #4102: MINOR: Update Scala to 2.12.4

2017-10-23 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4102


---


[GitHub] kafka pull request #4117: MINOR: Configure owasp.dependencycheck gradle plug...

2017-10-23 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/4117

MINOR: Configure owasp.dependencycheck gradle plugin

It seems to output a few false positives, but still
worth verifying.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka dependency-check

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4117.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4117


commit 9a4860f94c10685a1b88592e3248ae460072753d
Author: Ismael Juma 
Date:   2017-10-23T10:49:07Z

MINOR: Configure owasp.dependencycheck gradle plugin

It seems to output a few false positives, but still
worth verifying.




---


Build failed in Jenkins: kafka-trunk-jdk8 #2158

2017-10-23 Thread Apache Jenkins Server
See 


Changes:

[damian.guy] MINOR: add hint for setting an uncaught exception handler to 
JavaDocs

--
[...truncated 3.31 MB...]

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce STARTED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch STARTED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize 
STARTED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests STARTED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch STARTED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression STARTED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic STARTED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic PASSED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest STARTED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest PASSED

kafka.integration.MetricsDuringTopicCreationDeletionTest > 
testMetricsDuringTopicCreateDelete STARTED

kafka.integration.MetricsDuringTopicCreationDeletionTest > 
testMetricsDuringTopicCreateDelete PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
PASSED

kafka.integration.TopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
STARTED

kafka.integration.TopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithCollision STARTED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithCollision PASSED

kafka.integration.TopicMetadataTest > testAliveBrokerListWithNoTopics STARTED

kafka.integration.TopicMetadataTest > testAliveBrokerListWithNoTopics PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.TopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.TopicMetadataTest > testGetAllTopicMetadata STARTED

kafka.integration.TopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.TopicMetadataTest > testBasicTopicMetadata STARTED

kafka.integration.TopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithInvalidReplication 
STARTED

kaf

Request for Review: KAFKA-4930

2017-10-23 Thread Sönke Liebau
Hi everybody,

could one of the committers please have a quick look at my PR [1] for
KAFKA-4930?
I think I've addressed all comments I received so far and its been sitting
for a while ready for review.

Kind regards,
Sönke

[1]
https://github.com/apache/kafka/pull/2755


Build failed in Jenkins: kafka-trunk-jdk8 #2159

2017-10-23 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: Update Scala to 2.12.4

--
[...truncated 381.29 KB...]
kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testDistributedConcurrentModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce STARTED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch STARTED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize 
STARTED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests STARTED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch STARTED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression STARTED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic STARTED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic PASSED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest STARTED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest PASSED

kafka.integration.MetricsDuringTopicCreationDeletionTest > 
testMetricsDuringTopicCreateDelete STARTED

kafka.integration.MetricsDuringTopicCreationDeletionTest > 
testMetricsDuringTopicCreateDelete PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
PASSED

kafka.integration.TopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
STARTED

kafka.integration.TopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithCollision STARTED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithCollision PASSED

kafka.integration.TopicMetadataTest > testAliveBrokerListWithNoTopics STARTED

kafka.integration.TopicMetadataTest > testAliveBrokerListWithNoTopics PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.TopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.TopicMetadataTest > testGetAllTopicMetadata STARTED

kafka.integration.TopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.TopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.TopicMetadataTest > testBasicTopicMetadata STARTED

kafka.integration.TopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopicWithInvalidReplication 
STARTED

kafka.integration.TopicMetadataTest > testAutoC

[GitHub] kafka pull request #4113: KAFKA-6104: Added unit tests for ClusterConnection...

2017-10-23 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4113


---


[GitHub] kafka pull request #4108: KAFKA-6101 Reconnecting to broker does not exponen...

2017-10-23 Thread tedyu
Github user tedyu closed the pull request at:

https://github.com/apache/kafka/pull/4108


---


[GitHub] kafka pull request #4117: MINOR: Configure owasp.dependencycheck gradle plug...

2017-10-23 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4117


---


[GitHub] kafka pull request #4118: KAFKA-6101 Reconnecting to broker does not exponen...

2017-10-23 Thread tedyu
GitHub user tedyu opened a pull request:

https://github.com/apache/kafka/pull/4118

KAFKA-6101 Reconnecting to broker does not exponentially backoff



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tedyu/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4118.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4118


commit cd928d3867e42774bb167b3aaf11ca6a8dd8d48f
Author: tedyu 
Date:   2017-10-23T13:49:27Z

KAFKA-6101 Reconnecting to broker does not exponentially backoff




---


[GitHub] kafka pull request #4119: MINOR: added -1 value description as "high waterma...

2017-10-23 Thread ppatierno
GitHub user ppatierno opened a pull request:

https://github.com/apache/kafka/pull/4119

MINOR: added -1 value description as "high watermark" in the protocol 
delete records request 



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ppatierno/kafka minor-delrecords-prot

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4119.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4119


commit 79fa60f4d5f3f95b94c3bbd86621478763922f1d
Author: Paolo Patierno 
Date:   2017-10-23T13:57:50Z

Added "high watermark" meaning with -1 value for the partition offset 
during delete records




---


Build failed in Jenkins: kafka-trunk-jdk8 #2160

2017-10-23 Thread Apache Jenkins Server
See 


Changes:

[ismael] KAFKA-6104; Added unit tests for ClusterConnectionStates

--
[...truncated 2.92 MB...]

org.apache.kafka.common.security.plain.PlainSaslServerTest > 
noAuthorizationIdSpecified STARTED

org.apache.kafka.common.security.plain.PlainSaslServerTest > 
noAuthorizationIdSpecified PASSED

org.apache.kafka.common.security.plain.PlainSaslServerTest > 
authorizatonIdEqualsAuthenticationId STARTED

org.apache.kafka.common.security.plain.PlainSaslServerTest > 
authorizatonIdEqualsAuthenticationId PASSED

org.apache.kafka.common.security.plain.PlainSaslServerTest > 
authorizatonIdNotEqualsAuthenticationId STARTED

org.apache.kafka.common.security.plain.PlainSaslServerTest > 
authorizatonIdNotEqualsAuthenticationId PASSED

org.apache.kafka.common.security.authenticator.ClientAuthenticationFailureTest 
> testProducerWithInvalidCredentials STARTED

org.apache.kafka.common.security.authenticator.ClientAuthenticationFailureTest 
> testProducerWithInvalidCredentials PASSED

org.apache.kafka.common.security.authenticator.ClientAuthenticationFailureTest 
> testTransactionalProducerWithInvalidCredentials STARTED

org.apache.kafka.common.security.authenticator.ClientAuthenticationFailureTest 
> testTransactionalProducerWithInvalidCredentials PASSED

org.apache.kafka.common.security.authenticator.ClientAuthenticationFailureTest 
> testConsumerWithInvalidCredentials STARTED

org.apache.kafka.common.security.authenticator.ClientAuthenticationFailureTest 
> testConsumerWithInvalidCredentials PASSED

org.apache.kafka.common.security.authenticator.ClientAuthenticationFailureTest 
> testAdminClientWithInvalidCredentials STARTED

org.apache.kafka.common.security.authenticator.ClientAuthenticationFailureTest 
> testAdminClientWithInvalidCredentials PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testMissingUsernameSaslPlain STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testMissingUsernameSaslPlain PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testValidSaslScramMechanisms STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testValidSaslScramMechanisms PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslScramSslServerWithoutSaslAuthenticateHeaderFailure STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslScramSslServerWithoutSaslAuthenticateHeaderFailure PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslScramPlaintextServerWithoutSaslAuthenticateHeaderFailure STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslScramPlaintextServerWithoutSaslAuthenticateHeaderFailure PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslScramPlaintextServerWithoutSaslAuthenticateHeader STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslScramPlaintextServerWithoutSaslAuthenticateHeader PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testMechanismPluggability STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testMechanismPluggability PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testScramUsernameWithSpecialCharacters STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testScramUsernameWithSpecialCharacters PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testApiVersionsRequestWithUnsupportedVersion STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testApiVersionsRequestWithUnsupportedVersion PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testMissingPasswordSaslPlain STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testMissingPasswordSaslPlain PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testInvalidLoginModule STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testInvalidLoginModule PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslPlainPlaintextClientWithoutSaslAuthenticateHeader STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslPlainPlaintextClientWithoutSaslAuthenticateHeader PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslPlainSslClientWithoutSaslAuthenticateHeader STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslPlainSslClientWithoutSaslAuthenticateHeader PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslPlainSslClientWithoutSaslAuthenticateHeaderFailure STARTED

org.apache

[GitHub] kafka pull request #4120: Make CONSOLE_OUTPUT_FILE configurable

2017-10-23 Thread rnpridgeon
GitHub user rnpridgeon opened a pull request:

https://github.com/apache/kafka/pull/4120

Make CONSOLE_OUTPUT_FILE configurable

Even if Kafka is not configured without the console appender the 
CONSOLE_OUTPUT_FILE is generated. Allowing for this to be configurable offers 
more flexibility including redirecting `nohup.out` or the resulting redirect to 
`/dev/null`

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rnpridgeon/kafka patch-1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4120.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4120


commit 7f675114c7d9b335ab38066f411a06735f74fdf3
Author: Ryan P 
Date:   2017-10-23T14:45:57Z

Make CONSOLE_OUTPUT_FILE configurable 

Even if Kafka is not configured without the console appender the 
CONSOLE_OUTPUT_FILE is generated. Allowing for this to be configurable offers 
more flexibility including redirecting `nohup.out` or the resulting redirect to 
`/dev/null`




---


Re: Before creating KIP : Kafka Connect / Add a configuration provider class

2017-10-23 Thread Randall Hauch
Very interesting. Would the proposed configuration provider be set at the
connector level or the worker level? The latter would obviously be required
to handle all/multiple connector configurations. Either way, the provider
class(es) would need to be installed on the worker (really, every worker),
correct?

Would all provider implementations be custom implementations, or are there
some provider implementations that are general enough for Connect to
include them?

Best regards,

Randall

On Fri, Oct 20, 2017 at 5:08 AM, Florian Hussonnois 
wrote:

> Hi Team
>
> Before submitting a new KIP I would like to open the discussion regarding
> an enhancement of Kafka Connect.
>
> Currently, the only way to configure a connector (in distributed mode) is
> through REST endpoints while creating or updating a connector.
>
> It would be nice to have the possibility to specify a configs provider
> class (as we specify the connector class) in the JSON payload sent over the
> REST API.
> This class would be called during the connector creation to complete the
> configs submitted via REST.
>
> The motivations for a such functionality is for example to enforce a
> configuration for all deployed connectors, to provide default configs or to
> provide sensitive configs like user/password.
>
> I've met these requirements on different projects.
>
> Do you think, this feature merits a new KIP ?
>
> Thanks,
>
> --
> Florian HUSSONNOIS
>


[GitHub] kafka pull request #4118: KAFKA-6101 Reconnecting to broker does not exponen...

2017-10-23 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4118


---


Re: Before creating KIP : Kafka Connect / Add a configuration provider class

2017-10-23 Thread Sönke Liebau
I agree, sounds like an intriguing idea. I think we could probably come up
with a few common enough implementations that merit including in Kafka.
FileConfigProvider for example, so you can distribute common configs
throughout your cluster with some orchestration tool and users simply state
the identifier of some connection..

What I am wondering is, whether these classes would always return an entire
configuration, or is it a more specific approach where you might use a
FileConfigProvider to retrieve some hostname and some other ConfigProvider
to retrieve credentials, etc...



On Mon, Oct 23, 2017 at 5:12 PM, Randall Hauch  wrote:

> Very interesting. Would the proposed configuration provider be set at the
> connector level or the worker level? The latter would obviously be required
> to handle all/multiple connector configurations. Either way, the provider
> class(es) would need to be installed on the worker (really, every worker),
> correct?
>
> Would all provider implementations be custom implementations, or are there
> some provider implementations that are general enough for Connect to
> include them?
>
> Best regards,
>
> Randall
>
> On Fri, Oct 20, 2017 at 5:08 AM, Florian Hussonnois  >
> wrote:
>
> > Hi Team
> >
> > Before submitting a new KIP I would like to open the discussion regarding
> > an enhancement of Kafka Connect.
> >
> > Currently, the only way to configure a connector (in distributed mode) is
> > through REST endpoints while creating or updating a connector.
> >
> > It would be nice to have the possibility to specify a configs provider
> > class (as we specify the connector class) in the JSON payload sent over
> the
> > REST API.
> > This class would be called during the connector creation to complete the
> > configs submitted via REST.
> >
> > The motivations for a such functionality is for example to enforce a
> > configuration for all deployed connectors, to provide default configs or
> to
> > provide sensitive configs like user/password.
> >
> > I've met these requirements on different projects.
> >
> > Do you think, this feature merits a new KIP ?
> >
> > Thanks,
> >
> > --
> > Florian HUSSONNOIS
> >
>



-- 
Sönke Liebau
Partner
Tel. +49 179 7940878
OpenCore GmbH & Co. KG - Thomas-Mann-Straße 8 - 22880 Wedel - Germany


[jira] [Resolved] (KAFKA-6101) Reconnecting to broker does not exponentially backoff

2017-10-23 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-6101.

Resolution: Fixed
  Assignee: Ted Yu

> Reconnecting to broker does not exponentially backoff
> -
>
> Key: KAFKA-6101
> URL: https://issues.apache.org/jira/browse/KAFKA-6101
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.11.0.0
>Reporter: Sean Rohead
>Assignee: Ted Yu
> Fix For: 1.0.0, 1.1.0
>
> Attachments: 6101.v2.txt, 6101.v3.txt, text.html
>
>
> I am using com.typesafe.akka:akka-stream-kafka:0.17 which relies on 
> kafka-clients:0.11.0.0.
> I have set the reconnect.backoff.max.ms property to 6.
> When I start the application without kafka running, I see a flood of the 
> following log message:
> [warn] o.a.k.c.NetworkClient - Connection to node -1 could not be 
> established. Broker may not be available.
> The log messages occur several times a second and the frequency of these 
> messages does not decrease over time as would be expected if exponential 
> backoff was working properly.
> I set a breakpoint in the debugger in ClusterConnectionStates:188 and noticed 
> that every time this breakpoint is hit, nodeState.failedAttempts is always 0. 
> This is why the delay does not increase exponentially. It also appears that 
> every time the breakpoint is hit, it is on a different instance, so even 
> though the number of failedAttempts is incremented, we never get the 
> breakpoint for the same instance more than one time.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #4084: KAFKA-6070: add ipaddress and enum34 dependencies ...

2017-10-23 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4084


---


[jira] [Resolved] (KAFKA-5743) All ducktape services should store their files in subdirectories of /mnt

2017-10-23 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-5743.

   Resolution: Fixed
Fix Version/s: 1.0.0

> All ducktape services should store their files in subdirectories of /mnt
> 
>
> Key: KAFKA-5743
> URL: https://issues.apache.org/jira/browse/KAFKA-5743
> Project: Kafka
>  Issue Type: Improvement
>  Components: system tests
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
> Fix For: 1.0.0
>
>
> Currently, some ducktape services like KafkaService store their files 
> directly in /mnt.  This means that cleanup involves running {{rm -rf 
> /mnt/*}}.  It would be better if services stored their files in 
> subdirectories of mount.  For example, KafkaService could store its files in 
> /mnt/kafka.  This would make cleanup simpler and avoid the need to remove all 
> of /mnt.  It would also make running multiple services on the same node 
> possible.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5720) In Jenkins, kafka.api.SaslSslAdminClientIntegrationTest failed with org.apache.kafka.common.errors.TimeoutException

2017-10-23 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-5720.

   Resolution: Fixed
Fix Version/s: (was: 1.1.0)
   1.0.0

> In Jenkins, kafka.api.SaslSslAdminClientIntegrationTest failed with 
> org.apache.kafka.common.errors.TimeoutException
> ---
>
> Key: KAFKA-5720
> URL: https://issues.apache.org/jira/browse/KAFKA-5720
> Project: Kafka
>  Issue Type: Bug
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
>Priority: Minor
> Fix For: 1.0.0
>
>
> In Jenkins, kafka.api.SaslSslAdminClientIntegrationTest failed with 
> org.apache.kafka.common.errors.TimeoutException.
> {code}
> java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.TimeoutException: Aborted due to timeout.
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:213)
>   at 
> kafka.api.AdminClientIntegrationTest.testCallInFlightTimeouts(AdminClientIntegrationTest.scala:399)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.kafka.common.errors.TimeoutException: Aborted due to 
> timeout.
> {code}
> It's unclear whether this was an environment error or test bug.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-6070) ducker-ak: add ipaddress and enum34 dependencies to docker image

2017-10-23 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-6070.

   Resolution: Fixed
Fix Version/s: 1.1.0
   1.0.0

> ducker-ak: add ipaddress and enum34 dependencies to docker image
> 
>
> Key: KAFKA-6070
> URL: https://issues.apache.org/jira/browse/KAFKA-6070
> Project: Kafka
>  Issue Type: Bug
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
> Fix For: 1.0.0, 1.1.0
>
>
> ducker-ak: add ipaddress and enum34 dependencies to docker image



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Build failed in Jenkins: kafka-trunk-jdk9 #147

2017-10-23 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: Configure owasp.dependencycheck gradle plugin

--
[...truncated 1.90 MB...]
org.apache.kafka.connect.runtime.isolation.PluginUtilsTest > 
testPluginUrlsWithAbsoluteSymlink STARTED

org.apache.kafka.connect.runtime.isolation.PluginUtilsTest > 
testPluginUrlsWithAbsoluteSymlink PASSED

org.apache.kafka.connect.runtime.isolation.PluginUtilsTest > 
testEmptyStructurePluginUrls STARTED

org.apache.kafka.connect.runtime.isolation.PluginUtilsTest > 
testEmptyStructurePluginUrls PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testStartStop STARTED

org.apache.kafka.connect.util.KafkaBasedLogTest > testStartStop PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testReloadOnStart STARTED

org.apache.kafka.connect.util.KafkaBasedLogTest > testReloadOnStart PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > 
testReloadOnStartWithNoNewRecordsPresent STARTED

org.apache.kafka.connect.util.KafkaBasedLogTest > 
testReloadOnStartWithNoNewRecordsPresent PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testSendAndReadToEnd STARTED

org.apache.kafka.connect.util.KafkaBasedLogTest > testSendAndReadToEnd PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testConsumerError STARTED

org.apache.kafka.connect.util.KafkaBasedLogTest > testConsumerError PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testProducerError STARTED

org.apache.kafka.connect.util.KafkaBasedLogTest > testProducerError PASSED

org.apache.kafka.connect.util.ShutdownableThreadTest > testGracefulShutdown 
STARTED

org.apache.kafka.connect.util.ShutdownableThreadTest > testGracefulShutdown 
PASSED

org.apache.kafka.connect.util.ShutdownableThreadTest > testForcibleShutdown 
STARTED

org.apache.kafka.connect.util.ShutdownableThreadTest > testForcibleShutdown 
PASSED

org.apache.kafka.connect.util.TopicAdminTest > 
shouldNotCreateTopicWhenItAlreadyExists STARTED

org.apache.kafka.connect.util.TopicAdminTest > 
shouldNotCreateTopicWhenItAlreadyExists PASSED

org.apache.kafka.connect.util.TopicAdminTest > 
shouldCreateTopicWhenItDoesNotExist STARTED

org.apache.kafka.connect.util.TopicAdminTest > 
shouldCreateTopicWhenItDoesNotExist PASSED

org.apache.kafka.connect.util.TopicAdminTest > 
shouldReturnFalseWhenSuppliedNullTopicDescription STARTED

org.apache.kafka.connect.util.TopicAdminTest > 
shouldReturnFalseWhenSuppliedNullTopicDescription PASSED

org.apache.kafka.connect.util.TopicAdminTest > returnNullWithApiVersionMismatch 
STARTED

org.apache.kafka.connect.util.TopicAdminTest > returnNullWithApiVersionMismatch 
PASSED

org.apache.kafka.connect.util.TopicAdminTest > 
shouldCreateOneTopicWhenProvidedMultipleDefinitionsWithSameTopicName STARTED

org.apache.kafka.connect.util.TopicAdminTest > 
shouldCreateOneTopicWhenProvidedMultipleDefinitionsWithSameTopicName PASSED

org.apache.kafka.connect.util.TableTest > basicOperations STARTED

org.apache.kafka.connect.util.TableTest > basicOperations PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testPutConnectorConfig STARTED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > testStartStop 
STARTED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > testStartStop 
PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testPutTaskConfigs STARTED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testPutTaskConfigs PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testPutTaskConfigsZeroTasks STARTED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testPutTaskConfigsZeroTasks PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testRestoreTargetState STARTED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testRestoreTargetState PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testBackgroundUpdateTargetState STARTED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testBackgroundUpdateTargetState PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testBackgroundConnectorDeletion STARTED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testBackgroundConnectorDeletion PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testRestoreTargetStateUnexpectedDeletion STARTED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testRestoreTargetStateUnexpectedDeletion PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > testRestore 
STARTED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > testRestore 
PASSED

org.apache.kafka.connect.storage.KafkaConfigBackingStoreTest > 
testRestoreConnectorDeletion STARTED

org.apache.kafka.connect.storage.KafkaConfigB

Build failed in Jenkins: kafka-trunk-jdk7 #2916

2017-10-23 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: Configure owasp.dependencycheck gradle plugin

--
[...truncated 1.84 MB...]
org.apache.kafka.streams.KafkaStreamsTest > testStateGlobalThreadClose PASSED

org.apache.kafka.streams.KafkaStreamsTest > 
shouldThrowExceptionSettingUncaughtExceptionHandlerNotInCreateState STARTED

org.apache.kafka.streams.KafkaStreamsTest > 
shouldThrowExceptionSettingUncaughtExceptionHandlerNotInCreateState PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCleanup STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCleanup PASSED

org.apache.kafka.streams.KafkaStreamsTest > 
shouldThrowExceptionSettingStateListenerNotInCreateState STARTED

org.apache.kafka.streams.KafkaStreamsTest > 
shouldThrowExceptionSettingStateListenerNotInCreateState PASSED

org.apache.kafka.streams.KafkaStreamsTest > testNumberDefaultMetrics STARTED

org.apache.kafka.streams.KafkaStreamsTest > testNumberDefaultMetrics PASSED

org.apache.kafka.streams.KafkaStreamsTest > shouldReturnThreadMetadata STARTED

org.apache.kafka.streams.KafkaStreamsTest > shouldReturnThreadMetadata PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCloseIsIdempotent STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCloseIsIdempotent PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCannotCleanupWhileRunning 
STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCannotCleanupWhileRunning PASSED

org.apache.kafka.streams.KafkaStreamsTest > testStateThreadClose STARTED

org.apache.kafka.streams.KafkaStreamsTest > testStateThreadClose PASSED

org.apache.kafka.streams.KafkaStreamsTest > testStateChanges STARTED

org.apache.kafka.streams.KafkaStreamsTest > testStateChanges PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCannotStartTwice STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCannotStartTwice PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfProducerEnableIdempotenceIsOverriddenIfEosEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfProducerEnableIdempotenceIsOverriddenIfEosEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > shouldUseNewConfigsWhenPresent 
STARTED

org.apache.kafka.streams.StreamsConfigTest > shouldUseNewConfigsWhenPresent 
PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > testGetConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > shouldAcceptAtLeastOnce STARTED

org.apache.kafka.streams.StreamsConfigTest > shouldAcceptAtLeastOnce PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldUseCorrectDefaultsWhenNoneSpecified STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldUseCorrectDefaultsWhenNoneSpecified PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerEnableIdempotenceIfEosDisabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerEnableIdempotenceIfEosDisabled PASSED

org.apache.kafka.streams.StreamsConfigTest > defaultSerdeShouldBeConfigured 
STARTED

org.apache.kafka.streams.StreamsConfigTest > defaultSerdeShouldBeConfigured 
PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSetDifferentDefaultsIfEosEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSetDifferentDefaultsIfEosEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldNotOverrideUserConfigRetriesIfExactlyOnceEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldNotOverrideUserConfigRetriesIfExactlyOnceEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetProducerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > testGetProducerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerMaxInFlightRequestPerConnectionsWhenEosDisabled 
STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerMaxInFlightRequestPerConnectionsWhenEosDisabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowStreamsExceptionIfValueSerdeConfigFails STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowStreamsExceptionIfValueSerdeConfigFails PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfConsumerAutoCommitIsOverridden STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfConsumerAutoCommitIsOverridden PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowExceptionIfNotAtLestOnceOrExactlyO

try to subscribe

2017-10-23 Thread 赖志滨



Build failed in Jenkins: kafka-1.0-jdk7 #52

2017-10-23 Thread Apache Jenkins Server
See 


Changes:

[ismael] KAFKA-6101; Reconnecting to broker does not exponentially backoff

--
[...truncated 372.24 KB...]

kafka.server.ServerGenerateBrokerIdTest > testUserConfigAndGeneratedBrokerId 
STARTED

kafka.server.ServerGenerateBrokerIdTest > testUserConfigAndGeneratedBrokerId 
PASSED

kafka.server.ServerGenerateBrokerIdTest > 
testConsistentBrokerIdFromUserConfigAndMetaProps STARTED

kafka.server.ServerGenerateBrokerIdTest > 
testConsistentBrokerIdFromUserConfigAndMetaProps PASSED

kafka.server.DelayedOperationTest > testRequestPurge STARTED

kafka.server.DelayedOperationTest > testRequestPurge PASSED

kafka.server.DelayedOperationTest > testRequestExpiry STARTED

kafka.server.DelayedOperationTest > testRequestExpiry PASSED

kafka.server.DelayedOperationTest > 
shouldReturnNilOperationsOnCancelForKeyWhenKeyDoesntExist STARTED

kafka.server.DelayedOperationTest > 
shouldReturnNilOperationsOnCancelForKeyWhenKeyDoesntExist PASSED

kafka.server.DelayedOperationTest > testDelayedOperationLockOverride STARTED

kafka.server.DelayedOperationTest > testDelayedOperationLockOverride PASSED

kafka.server.DelayedOperationTest > 
shouldCancelForKeyReturningCancelledOperations STARTED

kafka.server.DelayedOperationTest > 
shouldCancelForKeyReturningCancelledOperations PASSED

kafka.server.DelayedOperationTest > testRequestSatisfaction STARTED

kafka.server.DelayedOperationTest > testRequestSatisfaction PASSED

kafka.server.DelayedOperationTest > testDelayedOperationLock STARTED

kafka.server.DelayedOperationTest > testDelayedOperationLock PASSED

kafka.server.MultipleListenersWithDefaultJaasContextTest > testProduceConsume 
STARTED

kafka.server.MultipleListenersWithDefaultJaasContextTest > testProduceConsume 
PASSED

kafka.server.ThrottledResponseExpirationTest > testThrottledRequest STARTED

kafka.server.ThrottledResponseExpirationTest > testThrottledRequest PASSED

kafka.server.ThrottledResponseExpirationTest > testExpire STARTED

kafka.server.ThrottledResponseExpirationTest > testExpire PASSED

kafka.server.ReplicaManagerQuotasTest > shouldGetBothMessagesIfQuotasAllow 
STARTED

kafka.server.ReplicaManagerQuotasTest > shouldGetBothMessagesIfQuotasAllow 
PASSED

kafka.server.ReplicaManagerQuotasTest > 
shouldExcludeSubsequentThrottledPartitions STARTED

kafka.server.ReplicaManagerQuotasTest > 
shouldExcludeSubsequentThrottledPartitions PASSED

kafka.server.ReplicaManagerQuotasTest > 
shouldGetNoMessagesIfQuotasExceededOnSubsequentPartitions STARTED

kafka.server.ReplicaManagerQuotasTest > 
shouldGetNoMessagesIfQuotasExceededOnSubsequentPartitions PASSED

kafka.server.ReplicaManagerQuotasTest > shouldIncludeInSyncThrottledReplicas 
STARTED

kafka.server.ReplicaManagerQuotasTest > shouldIncludeInSyncThrottledReplicas 
PASSED

kafka.server.SaslApiVersionsRequestTest > 
testApiVersionsRequestWithUnsupportedVersion STARTED

kafka.server.SaslApiVersionsRequestTest > 
testApiVersionsRequestWithUnsupportedVersion PASSED

kafka.server.SaslApiVersionsRequestTest > 
testApiVersionsRequestBeforeSaslHandshakeRequest STARTED

kafka.server.SaslApiVersionsRequestTest > 
testApiVersionsRequestBeforeSaslHandshakeRequest PASSED

kafka.server.SaslApiVersionsRequestTest > 
testApiVersionsRequestAfterSaslHandshakeRequest STARTED

kafka.server.SaslApiVersionsRequestTest > 
testApiVersionsRequestAfterSaslHandshakeRequest PASSED

kafka.server.ControlledShutdownLeaderSelectorTest > testSelectLeader STARTED

kafka.server.ControlledShutdownLeaderSelectorTest > testSelectLeader PASSED

kafka.server.MetadataCacheTest > 
getTopicMetadataWithNonSupportedSecurityProtocol STARTED

kafka.server.MetadataCacheTest > 
getTopicMetadataWithNonSupportedSecurityProtocol PASSED

kafka.server.MetadataCacheTest > getTopicMetadataIsrNotAvailable STARTED

kafka.server.MetadataCacheTest > getTopicMetadataIsrNotAvailable PASSED

kafka.server.MetadataCacheTest > getTopicMetadata STARTED

kafka.server.MetadataCacheTest > getTopicMetadata PASSED

kafka.server.MetadataCacheTest > getTopicMetadataReplicaNotAvailable STARTED

kafka.server.MetadataCacheTest > getTopicMetadataReplicaNotAvailable PASSED

kafka.server.MetadataCacheTest > getTopicMetadataPartitionLeaderNotAvailable 
STARTED

kafka.server.MetadataCacheTest > getTopicMetadataPartitionLeaderNotAvailable 
PASSED

kafka.server.MetadataCacheTest > getAliveBrokersShouldNotBeMutatedByUpdateCache 
STARTED

kafka.server.MetadataCacheTest > getAliveBrokersShouldNotBeMutatedByUpdateCache 
PASSED

kafka.server.MetadataCacheTest > getTopicMetadataNonExistingTopics STARTED

kafka.server.MetadataCacheTest > getTopicMetadataNonExistingTopics PASSED

kafka.server.FetchRequestTest > testBrokerRespectsPartitionsOrderAndSizeLimits 
STARTED

kafka.server.FetchRequestTest > testBrokerRespectsPartitionsOrderAndSizeLimits 
PASSED

kafka.server.FetchRequestTest > 
testDownConversionFromBatchedToUnbatchedResp

Re: [DISCUSS] KIP-204 : adding records deletion operation to the new Admin Client API

2017-10-23 Thread Colin McCabe
On Mon, Oct 23, 2017, at 01:37, Tom Bentley wrote:
> At the risk of muddying the waters further, have you considered
> "RecordsToDelete" as the name of the class? It's both shorter and more
> descriptive imho.

+1 for RecordsToDelete

> 
> Also "deleteBefore()" as the factory method name isn't very future proof
> if
> we came to support time-based deletion. Something like "beforeOffset()"
> would be clearer, imho.

Great idea.

best,
Colin

> 
> Putting these together: RecordsToDelete.beforeOffset() seems much clearer
> to me than DeleteRecordsTarget.deleteBefore()
> 
> 
> On 23 October 2017 at 08:45, Paolo Patierno  wrote:
> 
> > About the name I just started to have a doubt about DeletetionTarget
> > because it could be bounded to any deletion operation (i.e. delete topic,
> > ...) and not just what we want now, so records deletion.
> >
> > I have updated the KIP-204 using DeleteRecordsTarget so it's clear that
> > it's related to the delete records operation and what it means, so the
> > target for such operation.
> >
> >
> > Paolo Patierno
> > Senior Software Engineer (IoT) @ Red Hat
> > Microsoft MVP on Azure & IoT
> > Microsoft Azure Advisor
> >
> > Twitter : @ppatierno
> > Linkedin : paolopatierno
> > Blog : DevExperience
> >
> >
> > 
> > From: Paolo Patierno 
> > Sent: Monday, October 23, 2017 7:38 AM
> > To: dev@kafka.apache.org
> > Subject: Re: [DISCUSS] KIP-204 : adding records deletion operation to the
> > new Admin Client API
> >
> > Hi Colin,
> >
> > I was using the long primitive in the code but not updated the KIP yet,
> > sorry ... now it's updated !
> >
> > At same time I agree on using DeletionTarget ... KIP updated !
> >
> >
> > Regarding the deleteBefore factory method, it's a pattern already used
> > witn NewPartitions.increaseTo which I think it's really clear and give us
> > more possibility to evolve this DeletionTarget class if we'll add different
> > ways to specify such target not only offset based.
> >
> >
> > Thanks,
> >
> >
> > Paolo Patierno
> > Senior Software Engineer (IoT) @ Red Hat
> > Microsoft MVP on Azure & IoT
> > Microsoft Azure Advisor
> >
> > Twitter : @ppatierno
> > Linkedin : paolopatierno
> > Blog : DevExperience
> >
> >
> > 
> > From: Colin McCabe 
> > Sent: Friday, October 20, 2017 8:18 PM
> > To: dev@kafka.apache.org
> > Subject: Re: [DISCUSS] KIP-204 : adding records deletion operation to the
> > new Admin Client API
> >
> > > /** Describe records to delete */
> >  > public class DeleteRecords {
> >  > private Long offset;
> >
> > "DeleteRecords" doesn't really explain what the class is, though.  How
> > about "DeletionTarget"?  Also, why do we need a Long object rather than
> > a long primitive?
> >
> >  >
> >  > /**
> >  > * Delete all the records before the given {@code offset}
> >  > */
> >  > public static DeleteRecords deleteBefore(Long offset) { ... }
> >
> > This seems confusing to me.  What's wrong with a regular constructor for
> > DeletionTarget?
> >
> > best,
> > Colin
> >
> >
> > On Fri, Oct 20, 2017, at 01:28, Paolo Patierno wrote:
> > > Hi all,
> > >
> > >
> > > I have just updated the KIP with your suggestion.
> > >
> > > I'm going to continue implementation and tests with these changes,
> > > waiting for further discussion.
> > >
> > >
> > > Thanks,
> > >
> > >
> > > Paolo Patierno
> > > Senior Software Engineer (IoT) @ Red Hat
> > > Microsoft MVP on Azure & IoT
> > > Microsoft Azure Advisor
> > >
> > > Twitter : @ppatierno
> > > Linkedin : paolopatierno
> > > Blog : DevExperience
> > >
> > >
> > > 
> > > From: Paolo Patierno 
> > > Sent: Thursday, October 19, 2017 1:37 PM
> > > To: dev@kafka.apache.org
> > > Subject: Re: [DISCUSS] KIP-204 : adding records deletion operation to the
> > > new Admin Client API
> > >
> > > @Colin Yes you are right I'll update the KIP-204 mentioning the related
> > > ACL permission DELETE on TOPIC
> > >
> > >
> > > @Dong @Ismael
> > >
> > >
> > > Considering future improvements on this, it makes sense to me using a
> > > class instead of a Long.
> > >
> > > Maybe the name could be just "DeleteRecords" (as "NewPartitions") having
> > > a deleteBefore(Long) factory method for a simple creation when you need
> > > to delete before the specified offset.
> > >
> > >
> > > Thanks,
> > >
> > >
> > > Paolo Patierno
> > > Senior Software Engineer (IoT) @ Red Hat
> > > Microsoft MVP on Azure & IoT
> > > Microsoft Azure Advisor
> > >
> > > Twitter : @ppatierno
> > > Linkedin : paolopatierno
> > > Blog : DevExperience

Jenkins build is back to normal : kafka-trunk-jdk9 #148

2017-10-23 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : kafka-trunk-jdk7 #2917

2017-10-23 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-6106) Postpone normal processing of tasks within a thread until restoration of all tasks have completed

2017-10-23 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-6106:


 Summary: Postpone normal processing of tasks within a thread until 
restoration of all tasks have completed
 Key: KAFKA-6106
 URL: https://issues.apache.org/jira/browse/KAFKA-6106
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Affects Versions: 0.11.0.1, 1.0.0
Reporter: Guozhang Wang
Assignee: Guozhang Wang


Let's say a stream thread hosts multiple tasks, A and B. At the very beginning 
when A and B are assigned to the thread, the thread state is 
{{TASKS_ASSIGNED}}, and the thread start restoring these two tasks during this 
state using the restore consumer while using normal consumer for heartbeating.

If task A's restoration has completed earlier than task B, then the thread will 
start processing A immediately even when it is still in the {{TASKS_ASSIGNED}} 
phase. But processing task A will slow down restoration of task B since it is 
single-thread. So the thread's transition to {{RUNNING}} when all of its 
assigned tasks have completed restoring and now can be processed will be 
delayed.

Note that the streams instance's state will only transit to {{RUNNING}} when 
all of its threads have transit to {{RUNNING}}, so the instance's transition 
will also be delayed by this scenario.

We'd better to not start processing ready tasks immediately, but instead focus 
on restoration during the {{TASKS_ASSIGNED}} state to shorten the overall time 
of the instance's state transition.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Jenkins build is back to normal : kafka-1.0-jdk7 #53

2017-10-23 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-207:The Offsets which ListOffsetsResponse returns should monotonically increase even during a partition leader change

2017-10-23 Thread Jason Gustafson
Thanks for the KIP. I'm assuming the new behavior only affects ListOffsets
requests from the consumer. Might be worth mentioning that in the KIP.
Also, does it affect all ListOffsets requests, or only those that specify
the latest offset?

-Jason

On Wed, Oct 18, 2017 at 9:15 AM, Colin McCabe  wrote:

> On Wed, Oct 18, 2017, at 04:09, Ismael Juma wrote:
> > Thanks for the KIP, +1 (binding). A few comments:
> >
> > 1. I agree with Jun about LEADER_NOT_AVAILABLE for the error code for
> > older
> > versions.
> > 2. OffsetNotAvailableException seems clear enough (i.e. we don't need the
> > "ForPartition" part)
>
> Yeah, that is shorter and probably clearer.  Changed.
>
> > 3. The KIP seems to be missing the compatibility section.
>
> Added.
>
> > 4. It would be good to mention that it's now possible for a fetch to
> > succeed while list offsets will not for a period of time. And for older
> > versions, the latter will return LeaderNotAvailable while the former
> > would
> > work fine, which is a bit unexpected. Not much we can do about it, but
> > worth mentioning it in my opinion.
>
> Fair enough
>
> cheers,
> Colin
>
> >
> > Ismael
> >
> > On Tue, Oct 17, 2017 at 9:26 PM, Jun Rao  wrote:
> >
> > > Hi, Colin,
> > >
> > > Thanks for the KIP. +1. Just a minor comment. For the old client
> requests,
> > > would it be better to return a LEADER_NOT_AVAILABLE error instead?
> > >
> > > Jun
> > >
> > > On Tue, Oct 17, 2017 at 11:11 AM, Colin McCabe 
> wrote:
> > >
> > > > Hi all,
> > > >
> > > > I'd like to start the voting process for KIP-207:The  Offsets which
> > > > ListOffsetsResponse returns should monotonically increase even
> during a
> > > > partition leader change.
> > > >
> > > > See
> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > 207%3A+Offsets+returned+by+ListOffsetsResponse+should+be+
> > > > monotonically+increasing+even+during+a+partition+leader+change
> > > > for details.
> > > >
> > > > The voting process will run for at least 72 hours.
> > > >
> > > > regards,
> > > > Colin
> > > >
> > >
>


[GitHub] kafka pull request #3885: MINOR: Added ">" prompt in examples where kafka-co...

2017-10-23 Thread ppatierno
Github user ppatierno closed the pull request at:

https://github.com/apache/kafka/pull/3885


---


Re: [VOTE] KIP-171 - Extend Consumer Group Reset Offset for Stream Application

2017-10-23 Thread Guozhang Wang
Thanks Jorge for driving this KIP! +1 (binding).


Guozhang

On Mon, Oct 16, 2017 at 2:11 PM, Bill Bejeck  wrote:

> +1
>
> Thanks,
> Bill
>
> On Fri, Oct 13, 2017 at 6:36 PM, Ted Yu  wrote:
>
> > +1
> >
> > On Fri, Oct 13, 2017 at 3:32 PM, Matthias J. Sax 
> > wrote:
> >
> > > +1
> > >
> > >
> > >
> > > On 9/11/17 3:04 PM, Jorge Esteban Quilcate Otoya wrote:
> > > > Hi All,
> > > >
> > > > It seems that there is no further concern with the KIP-171.
> > > > At this point we would like to start the voting process.
> > > >
> > > > The KIP can be found here:
> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > 171+-+Extend+Consumer+Group+Reset+Offset+for+Stream+Application
> > > >
> > > >
> > > > Thanks!
> > > >
> > >
> > >
> >
>



-- 
-- Guozhang


Jenkins build is back to normal : kafka-trunk-jdk8 #2162

2017-10-23 Thread Apache Jenkins Server
See 




Re: [VOTE] 1.0.0 RC2

2017-10-23 Thread Thomas Crayford
We (Heroku) have run our usual set of perf tests against 1.0 RC2 and found
no notable issues so far. Once RC3 is out we will test that too.

Thanks

Tom Crayford
Heroku Kafka

On Tue, Oct 17, 2017 at 5:47 PM, Guozhang Wang  wrote:

> Hello Kafka users, developers and client-developers,
>
> This is the third candidate for release of Apache Kafka 1.0.0. The main PRs
> that gets merged in after RC1 are the following:
>
> https://github.com/apache/kafka/commit/dc6bfa553e73ffccd1e604963e076c
> 78d8ddcd69
>
> It's worth noting that starting in this version we are using a different
> version protocol with three digits: *major.minor.bug-fix*
>
> Any and all testing is welcome, but the following areas are worth
> highlighting:
>
> 1. Client developers should verify that their clients can produce/consume
> to/from 1.0.0 brokers (ideally with compressed and uncompressed data).
> 2. Performance and stress testing. Heroku and LinkedIn have helped with
> this in the past (and issues have been found and fixed).
> 3. End users can verify that their apps work correctly with the new
> release.
>
> This is a major version release of Apache Kafka. It includes 29 new KIPs.
> See the release notes and release plan
> (*https://cwiki.apache.org/confluence/pages/viewpage.
> action?pageId=71764913
>  >*)
> for more details. A few feature highlights:
>
> * Java 9 support with significantly faster TLS and CRC32C implementations
> * JBOD improvements: disk failure only disables failed disk but not the
> broker (KIP-112/KIP-113)
> * Controller improvements: async ZK access for faster administrative
> request handling
> * Newly added metrics across all the modules (KIP-164, KIP-168, KIP-187,
> KIP-188, KIP-196)
> * Kafka Streams API improvements (KIP-120 / 130 / 138 / 150 / 160 / 161),
> and drop compatibility "Evolving" annotations
>
> Release notes for the 1.0.0 release:
> *http://home.apache.org/~guozhang/kafka-1.0.0-rc2/RELEASE_NOTES.html
> *
>
>
>
> *** Please download, test and vote by Friday, October 20, 8pm PT
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> *http://home.apache.org/~guozhang/kafka-1.0.0-rc2/
> *
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
>
> * Javadoc:
> *http://home.apache.org/~guozhang/kafka-1.0.0-rc2/javadoc/
> *
>
> * Tag to be voted upon (off 1.0 branch) is the 1.0.0-rc2 tag:
>
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=
> 51d5f12e190a38547839c7d2710c97faaeaca586
>
> * Documentation:
> Note the documentation can't be pushed live due to changes that will not go
> live until the release. You can manually verify by downloading
> http://home.apache.org/~guozhang/kafka-1.0.0-rc2/
> kafka_2.11-1.0.0-site-docs.tgz
>
> * Successful Jenkins builds for the 1.0.0 branch:
> Unit/integration tests: https://builds.apache.org/job/kafka-1.0-jdk7/40/
> System test: https://jenkins.confluent.io/job/system-test-kafka-1.0/6/
>
>
> /**
>
>
> Thanks,
> -- Guozhang
>


Request for access to the KIP wiki page

2017-10-23 Thread Sönke Liebau
Hi all,

could someone please grant my user (userid: soenkeliebau) permissions to
the KIP page of the wiki, I'd like to create a proposal.

Thanks!

Sönke


Re: [VOTE] KIP-207:The Offsets which ListOffsetsResponse returns should monotonically increase even during a partition leader change

2017-10-23 Thread Colin McCabe
On Mon, Oct 23, 2017, at 10:29, Jason Gustafson wrote:
> Thanks for the KIP. I'm assuming the new behavior only affects
> ListOffsets requests from the consumer.

That's a very good point.  I will add a caveat that we only apply the
KIP-207 behavior to requests from clients, not requests from other
brokers (such as the ones made by ReplicaFetcherThread).

> Might be worth mentioning that in the KIP.
> Also, does it affect all ListOffsets requests, or only those that specify
> the latest offset?

I don't feel great about allowing someone to ask for the offset at time
T, get back X, and then ask again for the offset at T the next second
and get back InvalidOffsetException.  So it's probably best just to
apply the KIP-207 behavior to all ListOffsets requests from consumers.

Thinking about it a bit more, we should disable the KIP-207 behavior
when unclean leader elections are enabled on the broker.  When unclean
leader elections are enabled, data loss is possible.  So we cannot
guarantee that offsets will always go forwards, even in theory, in this
mode.

I update the kip-- check it out.

best,
Colin


> 
> -Jason
> 
> On Wed, Oct 18, 2017 at 9:15 AM, Colin McCabe  wrote:
> 
> > On Wed, Oct 18, 2017, at 04:09, Ismael Juma wrote:
> > > Thanks for the KIP, +1 (binding). A few comments:
> > >
> > > 1. I agree with Jun about LEADER_NOT_AVAILABLE for the error code for
> > > older
> > > versions.
> > > 2. OffsetNotAvailableException seems clear enough (i.e. we don't need the
> > > "ForPartition" part)
> >
> > Yeah, that is shorter and probably clearer.  Changed.
> >
> > > 3. The KIP seems to be missing the compatibility section.
> >
> > Added.
> >
> > > 4. It would be good to mention that it's now possible for a fetch to
> > > succeed while list offsets will not for a period of time. And for older
> > > versions, the latter will return LeaderNotAvailable while the former
> > > would
> > > work fine, which is a bit unexpected. Not much we can do about it, but
> > > worth mentioning it in my opinion.
> >
> > Fair enough
> >
> > cheers,
> > Colin
> >
> > >
> > > Ismael
> > >
> > > On Tue, Oct 17, 2017 at 9:26 PM, Jun Rao  wrote:
> > >
> > > > Hi, Colin,
> > > >
> > > > Thanks for the KIP. +1. Just a minor comment. For the old client
> > requests,
> > > > would it be better to return a LEADER_NOT_AVAILABLE error instead?
> > > >
> > > > Jun
> > > >
> > > > On Tue, Oct 17, 2017 at 11:11 AM, Colin McCabe 
> > wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > > I'd like to start the voting process for KIP-207:The  Offsets which
> > > > > ListOffsetsResponse returns should monotonically increase even
> > during a
> > > > > partition leader change.
> > > > >
> > > > > See
> > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > > 207%3A+Offsets+returned+by+ListOffsetsResponse+should+be+
> > > > > monotonically+increasing+even+during+a+partition+leader+change
> > > > > for details.
> > > > >
> > > > > The voting process will run for at least 72 hours.
> > > > >
> > > > > regards,
> > > > > Colin
> > > > >
> > > >
> >


Re: [DISCUSS]: KIP-159: Introducing Rich functions to Streams

2017-10-23 Thread Matthias J. Sax
Interesting.

I thought that https://issues.apache.org/jira/browse/KAFKA-4125 is the
main motivation for this KIP :)

I also think, that we should not expose the full ProcessorContext at DSL
level.

Thus, overall I am not even sure if we should fix KAFKA-3907 at all.
Manual commits are something DSL users should not worry about -- and if
one really needs this, an advanced user can still insert a dummy
`transform` to request a commit from there.

-Matthias


On 10/18/17 5:39 AM, Jeyhun Karimov wrote:
> Hi,
> 
> The main intuition is to solve [1], which is part of this KIP.
> I agree with you that this might not seem semantically correct as we are
> not committing record state.
> Alternatively, we can remove commit() from RecordContext and add
> ProcessorContext (which has commit() method) as an extra argument to Rich
> methods:
> 
> instead of
> public interface RichValueMapper {
> VR apply(final V value,
>  final K key,
>  final RecordContext recordContext);
> }
> 
> we can adopt
> 
> public interface RichValueMapper {
> VR apply(final V value,
>  final K key,
>  final RecordContext recordContext,
>  final ProcessorContext processorContext);
> }
> 
> 
> However, in this case, a user can get confused as ProcessorContext and
> RecordContext share some methods with the same name.
> 
> 
> Cheers,
> Jeyhun
> 
> 
> [1] https://issues.apache.org/jira/browse/KAFKA-3907
> 
> 
> On Tue, Oct 17, 2017 at 3:19 AM Guozhang Wang  wrote:
> 
>> Regarding #6 above, I'm still not clear why we would need `commit()` in
>> both ProcessorContext and RecordContext, could you elaborate a bit more?
>>
>> To me `commit()` is really a processor context not a record context
>> logically: when you call that function, it means we would commit the state
>> of the whole task up to this processed record, not only that single record
>> itself.
>>
>>
>> Guozhang
>>
>> On Mon, Oct 16, 2017 at 9:19 AM, Jeyhun Karimov 
>> wrote:
>>
>>> Hi,
>>>
>>> Thanks for the feedback.
>>>
>>>
>>> 0. RichInitializer definition seems missing.
>>>
>>>
>>>
>>> - Fixed.
>>>
>>>
>>>  I'd suggest moving the key parameter in the RichValueXX and RichReducer
 after the value parameters, as well as in the templates; e.g.
 public interface RichValueJoiner {
 VR apply(final V1 value1, final V2 value2, final K key, final
 RecordContext
 recordContext);
 }
>>>
>>>
>>>
>>> - Fixed.
>>>
>>>
>>> 2. Some of the listed functions are not necessary since their pairing
>> APIs
 are being deprecated in 1.0 already:
  KGroupedStream groupBy(final RichKeyValueMapper> ?
 super V, KR> selector,
final Serde keySerde,
final Serde valSerde);
  KStream leftJoin(final KTable table,
  final RichValueJoiner> super
 V,
 ? super VT, ? extends VR> joiner,
  final Serde keySerde,
  final Serde valSerde);
>>>
>>>
>>> -Fixed
>>>
>>> 3. For a few functions where we are adding three APIs for a combo of both
 mapper / joiner, or both initializer / aggregator, or adder /
>> subtractor,
 I'm wondering if we can just keep one that use "rich" functions for
>> both;
 so that we can have less overloads and let users who only want to
>> access
 one of them to just use dummy parameter declarations. For example:

  KStream join(final GlobalKTable
>> globalKTable,
  final RichKeyValueMapper>>> super
  V, ? extends GK> keyValueMapper,
  final RichValueJoiner> super
 V,
 ? super GV, ? extends RV> joiner);
>>>
>>>
>>>
>>> -Agreed. Fixed.
>>>
>>>
>>> 4. For TimeWindowedKStream, I'm wondering why we do not make its
 Initializer also "rich" functions? I.e.
>>>
>>>
>>> - It was a typo. Fixed.
>>>
>>>
>>> 5. We need to move "RecordContext" from o.a.k.processor.internals to
 o.a.k.processor.

 6. I'm not clear why we want to move `commit()` from ProcessorContext
>> to
 RecordContext?

>>>
>>> -
>>> Because it makes sense logically and  to reduce code maintenance (both
>>> interfaces have offset() timestamp() topic() partition() methods),  I
>>> inherit ProcessorContext from RecordContext.
>>> Since we need commit() method both in ProcessorContext and in
>> RecordContext
>>> I move commit() method to parent class (RecordContext).
>>>
>>>
>>> Cheers,
>>> Jeyhun
>>>
>>>
>>>
>>> On Wed, Oct 11, 2017 at 12:59 AM, Guozhang Wang 
>>> wrote:
>>>
 Jeyhun,

 Thanks for the updated KIP, here are my comments.

 0. RichInitializer definition seems missing.

 1. I'd suggest moving the key parameter in the RichValueXX and
>>> RichReducer
 after the value parameters, as well as in the templates; e.g.

 public interface RichValueJoiner {
 VR apply(final V1 

Re: Request for access to the KIP wiki page

2017-10-23 Thread Guozhang Wang
Hello Sönke,


It's done. Cheers.


Guozhang


On Mon, Oct 23, 2017 at 11:08 AM, Sönke Liebau <
soenke.lie...@opencore.com.invalid> wrote:

> Hi all,
>
> could someone please grant my user (userid: soenkeliebau) permissions to
> the KIP page of the wiki, I'd like to create a proposal.
>
> Thanks!
>
> Sönke
>



-- 
-- Guozhang


[jira] [Created] (KAFKA-6107) SCRAM user add fails if Kafka has never been started

2017-10-23 Thread Dustin Cote (JIRA)
Dustin Cote created KAFKA-6107:
--

 Summary: SCRAM user add fails if Kafka has never been started
 Key: KAFKA-6107
 URL: https://issues.apache.org/jira/browse/KAFKA-6107
 Project: Kafka
  Issue Type: Bug
  Components: tools, zkclient
Affects Versions: 0.11.0.0
Reporter: Dustin Cote
Priority: Minor


When trying to add a SCRAM user in ZooKeeper without having ever starting 
Kafka, the kafka-configs tool does not handle it well. This is a common use 
case because starting a new cluster where you want SCRAM for inter broker 
communication would generally result in seeing this problem. Today, the 
workaround is to start Kafka, add the user, then restart Kafka. Here's how to 
reproduce:

1) Start ZooKeeper
2) Run 
{code}
bin/kafka-configs --zookeeper localhost:2181 --alter --add-config 
'SCRAM-SHA-256=[iterations=8192,password=broker_pwd],SCRAM-SHA-512=[password=broker_pwd]'
 --entity-type users --entity-name broker
{code}

This will result in:
{code}
bin/kafka-configs --zookeeper localhost:2181 --alter --add-config 
'SCRAM-SHA-256=[iterations=8192,password=broker_pwd],SCRAM-SHA-512=[password=broker_pwd]'
 --entity-type users --entity-name broker
Error while executing config command 
org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode 
for /config/changes/config_change_
org.I0Itec.zkclient.exception.ZkNoNodeException: 
org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode 
for /config/changes/config_change_
at org.I0Itec.zkclient.exception.ZkException.create(ZkException.java:47)
at org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:1001)
at org.I0Itec.zkclient.ZkClient.create(ZkClient.java:528)
at 
org.I0Itec.zkclient.ZkClient.createPersistentSequential(ZkClient.java:444)
at kafka.utils.ZkPath.createPersistentSequential(ZkUtils.scala:1045)
at kafka.utils.ZkUtils.createSequentialPersistentPath(ZkUtils.scala:527)
at 
kafka.admin.AdminUtils$.kafka$admin$AdminUtils$$changeEntityConfig(AdminUtils.scala:600)
at 
kafka.admin.AdminUtils$.changeUserOrUserClientIdConfig(AdminUtils.scala:551)
at kafka.admin.AdminUtilities$class.changeConfigs(AdminUtils.scala:63)
at kafka.admin.AdminUtils$.changeConfigs(AdminUtils.scala:72)
at kafka.admin.ConfigCommand$.alterConfig(ConfigCommand.scala:101)
at kafka.admin.ConfigCommand$.main(ConfigCommand.scala:68)
at kafka.admin.ConfigCommand.main(ConfigCommand.scala)
Caused by: org.apache.zookeeper.KeeperException$NoNodeException: 
KeeperErrorCode = NoNode for /config/changes/config_change_
at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:783)
at org.I0Itec.zkclient.ZkConnection.create(ZkConnection.java:100)
at org.I0Itec.zkclient.ZkClient$3.call(ZkClient.java:531)
at org.I0Itec.zkclient.ZkClient$3.call(ZkClient.java:528)
at org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:991)
... 11 more
{code}

The command doesn't appear to fail but it does throw an exception and return an 
error.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #4095: KAFKA-5140: Fix reset integration test

2017-10-23 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4095


---


[jira] [Resolved] (KAFKA-5140) Flaky ResetIntegrationTest

2017-10-23 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-5140.
--
   Resolution: Fixed
Fix Version/s: (was: 0.11.0.0)
   0.11.0.2
   1.0.0

Issue resolved by pull request 4095
[https://github.com/apache/kafka/pull/4095]

> Flaky ResetIntegrationTest
> --
>
> Key: KAFKA-5140
> URL: https://issues.apache.org/jira/browse/KAFKA-5140
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Affects Versions: 0.10.2.0
>Reporter: Matthias J. Sax
>Assignee: Guozhang Wang
> Fix For: 1.0.0, 0.11.0.2
>
>
> {noformat}
> org.apache.kafka.streams.integration.ResetIntegrationTest > 
> testReprocessingFromScratchAfterResetWithIntermediateUserTopic FAILED
> java.lang.AssertionError: 
> Expected: <[KeyValue(2986681642095, 1), KeyValue(2986681642055, 1), 
> KeyValue(2986681642075, 1), KeyValue(2986681642035, 1), 
> KeyValue(2986681642095, 1), KeyValue(2986681642055, 1), 
> KeyValue(2986681642115, 1), KeyValue(2986681642075, 1), 
> KeyValue(2986681642075, 2), KeyValue(2986681642095, 2), 
> KeyValue(2986681642115, 1), KeyValue(2986681642135, 1), 
> KeyValue(2986681642095, 2), KeyValue(2986681642115, 2), 
> KeyValue(2986681642155, 1), KeyValue(2986681642135, 1), 
> KeyValue(2986681642115, 2), KeyValue(2986681642135, 2), 
> KeyValue(2986681642155, 1), KeyValue(2986681642175, 1), 
> KeyValue(2986681642135, 2), KeyValue(2986681642155, 2), 
> KeyValue(2986681642175, 1), KeyValue(2986681642195, 1), 
> KeyValue(2986681642135, 3), KeyValue(2986681642155, 2), 
> KeyValue(2986681642175, 2), KeyValue(2986681642195, 1), 
> KeyValue(2986681642155, 3), KeyValue(2986681642175, 2), 
> KeyValue(2986681642195, 2), KeyValue(2986681642155, 3), 
> KeyValue(2986681642175, 3), KeyValue(2986681642195, 2), 
> KeyValue(2986681642155, 4), KeyValue(2986681642175, 3), 
> KeyValue(2986681642195, 3)]>
>  but: was <[KeyValue(2986681642095, 1), KeyValue(2986681642055, 1), 
> KeyValue(2986681642075, 1), KeyValue(2986681642035, 1), 
> KeyValue(2986681642095, 1), KeyValue(2986681642055, 1), 
> KeyValue(2986681642115, 1), KeyValue(2986681642075, 1), 
> KeyValue(2986681642075, 2), KeyValue(2986681642095, 2), 
> KeyValue(2986681642115, 1), KeyValue(2986681642135, 1), 
> KeyValue(2986681642095, 2), KeyValue(2986681642115, 2), 
> KeyValue(2986681642155, 1), KeyValue(2986681642135, 1), 
> KeyValue(2986681642115, 2), KeyValue(2986681642135, 2), 
> KeyValue(2986681642155, 1), KeyValue(2986681642175, 1), 
> KeyValue(2986681642135, 2), KeyValue(2986681642155, 2), 
> KeyValue(2986681642175, 1), KeyValue(2986681642195, 1), 
> KeyValue(2986681642135, 3), KeyValue(2986681642155, 2), 
> KeyValue(2986681642175, 2), KeyValue(2986681642195, 1), 
> KeyValue(2986681642155, 3), KeyValue(2986681642175, 2), 
> KeyValue(2986681642195, 2), KeyValue(2986681642155, 3)]>
> at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
> at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:8)
> at 
> org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterResetWithIntermediateUserTopic(ResetIntegrationTest.java:190)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [DISCUSS]: KIP-159: Introducing Rich functions to Streams

2017-10-23 Thread Jeyhun Karimov
Hi Matthias,

It is probably my bad, the discussion was a bit long in this thread. I
proposed the related issue in the related KIP discuss thread [1] and got an
approval [2,3].
Maybe I misunderstood.

[1]
http://search-hadoop.com/m/Kafka/uyzND19Asmg1GKKXT1?subj=Re+DISCUSS+KIP+159+Introducing+Rich+functions+to+Streams
[2]
http://search-hadoop.com/m/Kafka/uyzND1kpct22GKKXT1?subj=Re+DISCUSS+KIP+159+Introducing+Rich+functions+to+Streams
[3]
http://search-hadoop.com/m/Kafka/uyzND1G6TGIGKKXT1?subj=Re+DISCUSS+KIP+159+Introducing+Rich+functions+to+Streams


On Mon, Oct 23, 2017 at 8:44 PM Matthias J. Sax 
wrote:

> Interesting.
>
> I thought that https://issues.apache.org/jira/browse/KAFKA-4125 is the
> main motivation for this KIP :)
>
> I also think, that we should not expose the full ProcessorContext at DSL
> level.
>
> Thus, overall I am not even sure if we should fix KAFKA-3907 at all.
> Manual commits are something DSL users should not worry about -- and if
> one really needs this, an advanced user can still insert a dummy
> `transform` to request a commit from there.
>
> -Matthias
>
>
> On 10/18/17 5:39 AM, Jeyhun Karimov wrote:
> > Hi,
> >
> > The main intuition is to solve [1], which is part of this KIP.
> > I agree with you that this might not seem semantically correct as we are
> > not committing record state.
> > Alternatively, we can remove commit() from RecordContext and add
> > ProcessorContext (which has commit() method) as an extra argument to Rich
> > methods:
> >
> > instead of
> > public interface RichValueMapper {
> > VR apply(final V value,
> >  final K key,
> >  final RecordContext recordContext);
> > }
> >
> > we can adopt
> >
> > public interface RichValueMapper {
> > VR apply(final V value,
> >  final K key,
> >  final RecordContext recordContext,
> >  final ProcessorContext processorContext);
> > }
> >
> >
> > However, in this case, a user can get confused as ProcessorContext and
> > RecordContext share some methods with the same name.
> >
> >
> > Cheers,
> > Jeyhun
> >
> >
> > [1] https://issues.apache.org/jira/browse/KAFKA-3907
> >
> >
> > On Tue, Oct 17, 2017 at 3:19 AM Guozhang Wang 
> wrote:
> >
> >> Regarding #6 above, I'm still not clear why we would need `commit()` in
> >> both ProcessorContext and RecordContext, could you elaborate a bit more?
> >>
> >> To me `commit()` is really a processor context not a record context
> >> logically: when you call that function, it means we would commit the
> state
> >> of the whole task up to this processed record, not only that single
> record
> >> itself.
> >>
> >>
> >> Guozhang
> >>
> >> On Mon, Oct 16, 2017 at 9:19 AM, Jeyhun Karimov 
> >> wrote:
> >>
> >>> Hi,
> >>>
> >>> Thanks for the feedback.
> >>>
> >>>
> >>> 0. RichInitializer definition seems missing.
> >>>
> >>>
> >>>
> >>> - Fixed.
> >>>
> >>>
> >>>  I'd suggest moving the key parameter in the RichValueXX and
> RichReducer
>  after the value parameters, as well as in the templates; e.g.
>  public interface RichValueJoiner {
>  VR apply(final V1 value1, final V2 value2, final K key, final
>  RecordContext
>  recordContext);
>  }
> >>>
> >>>
> >>>
> >>> - Fixed.
> >>>
> >>>
> >>> 2. Some of the listed functions are not necessary since their pairing
> >> APIs
>  are being deprecated in 1.0 already:
>   KGroupedStream groupBy(final RichKeyValueMapper >> ?
>  super V, KR> selector,
> final Serde keySerde,
> final Serde valSerde);
>   KStream leftJoin(final KTable table,
>   final RichValueJoiner >> super
>  V,
>  ? super VT, ? extends VR> joiner,
>   final Serde keySerde,
>   final Serde valSerde);
> >>>
> >>>
> >>> -Fixed
> >>>
> >>> 3. For a few functions where we are adding three APIs for a combo of
> both
>  mapper / joiner, or both initializer / aggregator, or adder /
> >> subtractor,
>  I'm wondering if we can just keep one that use "rich" functions for
> >> both;
>  so that we can have less overloads and let users who only want to
> >> access
>  one of them to just use dummy parameter declarations. For example:
> 
>   KStream join(final GlobalKTable
> >> globalKTable,
>   final RichKeyValueMapper  super
>   V, ? extends GK> keyValueMapper,
>   final RichValueJoiner >> super
>  V,
>  ? super GV, ? extends RV> joiner);
> >>>
> >>>
> >>>
> >>> -Agreed. Fixed.
> >>>
> >>>
> >>> 4. For TimeWindowedKStream, I'm wondering why we do not make its
>  Initializer also "rich" functions? I.e.
> >>>
> >>>
> >>> - It was a typo. Fixed.
> >>>
> >>>
> >>> 5. We need to move "RecordContext" from o.a.k.processor.internals to
>  o.a.k.processor.
> 
>  6. I'm not clear 

[jira] [Created] (KAFKA-6108) Synchronizing on commits and StandbyTasks can be improved

2017-10-23 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-6108:
--

 Summary: Synchronizing on commits and StandbyTasks can be improved
 Key: KAFKA-6108
 URL: https://issues.apache.org/jira/browse/KAFKA-6108
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 1.0.0
Reporter: Matthias J. Sax


In Kafka Streams, we use an optimization that allows us to reuse a source topic 
as changelog topic (and thus, avoid unnecessary data duplication) if we read a 
topic directly as {{KTable}}. To guarantee that {{StandbyTasks}} provide a 
correct state, we need to synchronize the read progress of the {{StandbyTasks}} 
with the processing progress of the main {{StreamTask}} --- otherwise, the 
{{StandbyTasks}} might restore state too much into the future. For this, we 
limit the allowed restore offsets of the {{StandbyTasks}} to be not larger than 
the committed offsets of the {{StreamTask}}.

Furthermore, we buffer all data returned by the restore consumer that is beyond 
the allowed restore-offsets in-memory.

To achieve both goals, we regularly update the max allowed restore offsets 
(this is done task internally) and we also use a flag {{processStandbyRecords}} 
within {{StreamThread}} with the purpose to not call {{poll()}} on the restore 
consumer if our in-memory buffer has already data beyond the allowed max 
restore offsets.

We should consider:
 - unify both places in the code and put the whole logic into a single place 
(suggestion is to use the {{StreamThread}} -- a tasks, does not need to know 
about this optimization)
 - feed only those data into the task, that the task is allowed to restore 
(instead of everything)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Requesting Wiki Comment Permissions

2017-10-23 Thread Jordan Pilat
Hello,

Would somebody please be so kind as to grant me "comment" permissions on the 
Wiki?

The page in question to which I'd like to add a comment is:
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+data+structures+in+Zookeeper

My confluence username is jrpilat

Thanks!
- Jordan


Re: [DISCUSS]: KIP-159: Introducing Rich functions to Streams

2017-10-23 Thread Matthias J. Sax
Fair point. This is a long discussion and I totally forgot that we
discussed this.

Seems I changed my opinion about including KAFKA-3907...

Happy to hear what others think.


-Matthias

On 10/23/17 1:20 PM, Jeyhun Karimov wrote:
> Hi Matthias,
> 
> It is probably my bad, the discussion was a bit long in this thread. I
> proposed the related issue in the related KIP discuss thread [1] and got an
> approval [2,3].
> Maybe I misunderstood.
> 
> [1]
> http://search-hadoop.com/m/Kafka/uyzND19Asmg1GKKXT1?subj=Re+DISCUSS+KIP+159+Introducing+Rich+functions+to+Streams
> [2]
> http://search-hadoop.com/m/Kafka/uyzND1kpct22GKKXT1?subj=Re+DISCUSS+KIP+159+Introducing+Rich+functions+to+Streams
> [3]
> http://search-hadoop.com/m/Kafka/uyzND1G6TGIGKKXT1?subj=Re+DISCUSS+KIP+159+Introducing+Rich+functions+to+Streams
> 
> 
> On Mon, Oct 23, 2017 at 8:44 PM Matthias J. Sax 
> wrote:
> 
>> Interesting.
>>
>> I thought that https://issues.apache.org/jira/browse/KAFKA-4125 is the
>> main motivation for this KIP :)
>>
>> I also think, that we should not expose the full ProcessorContext at DSL
>> level.
>>
>> Thus, overall I am not even sure if we should fix KAFKA-3907 at all.
>> Manual commits are something DSL users should not worry about -- and if
>> one really needs this, an advanced user can still insert a dummy
>> `transform` to request a commit from there.
>>
>> -Matthias
>>
>>
>> On 10/18/17 5:39 AM, Jeyhun Karimov wrote:
>>> Hi,
>>>
>>> The main intuition is to solve [1], which is part of this KIP.
>>> I agree with you that this might not seem semantically correct as we are
>>> not committing record state.
>>> Alternatively, we can remove commit() from RecordContext and add
>>> ProcessorContext (which has commit() method) as an extra argument to Rich
>>> methods:
>>>
>>> instead of
>>> public interface RichValueMapper {
>>> VR apply(final V value,
>>>  final K key,
>>>  final RecordContext recordContext);
>>> }
>>>
>>> we can adopt
>>>
>>> public interface RichValueMapper {
>>> VR apply(final V value,
>>>  final K key,
>>>  final RecordContext recordContext,
>>>  final ProcessorContext processorContext);
>>> }
>>>
>>>
>>> However, in this case, a user can get confused as ProcessorContext and
>>> RecordContext share some methods with the same name.
>>>
>>>
>>> Cheers,
>>> Jeyhun
>>>
>>>
>>> [1] https://issues.apache.org/jira/browse/KAFKA-3907
>>>
>>>
>>> On Tue, Oct 17, 2017 at 3:19 AM Guozhang Wang 
>> wrote:
>>>
 Regarding #6 above, I'm still not clear why we would need `commit()` in
 both ProcessorContext and RecordContext, could you elaborate a bit more?

 To me `commit()` is really a processor context not a record context
 logically: when you call that function, it means we would commit the
>> state
 of the whole task up to this processed record, not only that single
>> record
 itself.


 Guozhang

 On Mon, Oct 16, 2017 at 9:19 AM, Jeyhun Karimov 
 wrote:

> Hi,
>
> Thanks for the feedback.
>
>
> 0. RichInitializer definition seems missing.
>
>
>
> - Fixed.
>
>
>  I'd suggest moving the key parameter in the RichValueXX and
>> RichReducer
>> after the value parameters, as well as in the templates; e.g.
>> public interface RichValueJoiner {
>> VR apply(final V1 value1, final V2 value2, final K key, final
>> RecordContext
>> recordContext);
>> }
>
>
>
> - Fixed.
>
>
> 2. Some of the listed functions are not necessary since their pairing
 APIs
>> are being deprecated in 1.0 already:
>>  KGroupedStream groupBy(final RichKeyValueMapper>>> ?
>> super V, KR> selector,
>>final Serde keySerde,
>>final Serde valSerde);
>>  KStream leftJoin(final KTable table,
>>  final RichValueJoiner>>> super
>> V,
>> ? super VT, ? extends VR> joiner,
>>  final Serde keySerde,
>>  final Serde valSerde);
>
>
> -Fixed
>
> 3. For a few functions where we are adding three APIs for a combo of
>> both
>> mapper / joiner, or both initializer / aggregator, or adder /
 subtractor,
>> I'm wondering if we can just keep one that use "rich" functions for
 both;
>> so that we can have less overloads and let users who only want to
 access
>> one of them to just use dummy parameter declarations. For example:
>>
>>  KStream join(final GlobalKTable
 globalKTable,
>>  final RichKeyValueMapper> super
>>  V, ? extends GK> keyValueMapper,
>>  final RichValueJoiner>>> super
>> V,
>> ? super GV, ? extends RV> joiner);
>
>
>
> -Agreed. Fixed.
>
>
> 4. For 

Build failed in Jenkins: kafka-1.0-jdk7 #54

2017-10-23 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-5140: Fix reset integration test

--
[...truncated 1.82 MB...]
org.apache.kafka.streams.kstream.WindowTest > shouldThrowIfStartIsNegative 
PASSED

org.apache.kafka.streams.kstream.WindowTest > 
shouldNotBeEqualIfStartOrEndIsDifferent STARTED

org.apache.kafka.streams.kstream.WindowTest > 
shouldNotBeEqualIfStartOrEndIsDifferent PASSED

org.apache.kafka.streams.kstream.SessionWindowsTest > 
retentionTimeShouldBeGapIfGapIsLargerThanDefaultRetentionTime STARTED

org.apache.kafka.streams.kstream.SessionWindowsTest > 
retentionTimeShouldBeGapIfGapIsLargerThanDefaultRetentionTime PASSED

org.apache.kafka.streams.kstream.SessionWindowsTest > shouldSetWindowGap STARTED

org.apache.kafka.streams.kstream.SessionWindowsTest > shouldSetWindowGap PASSED

org.apache.kafka.streams.kstream.SessionWindowsTest > 
retentionTimeMustNotBeNegative STARTED

org.apache.kafka.streams.kstream.SessionWindowsTest > 
retentionTimeMustNotBeNegative PASSED

org.apache.kafka.streams.kstream.SessionWindowsTest > windowSizeMustNotBeZero 
STARTED

org.apache.kafka.streams.kstream.SessionWindowsTest > windowSizeMustNotBeZero 
PASSED

org.apache.kafka.streams.kstream.SessionWindowsTest > 
windowSizeMustNotBeNegative STARTED

org.apache.kafka.streams.kstream.SessionWindowsTest > 
windowSizeMustNotBeNegative PASSED

org.apache.kafka.streams.kstream.SessionWindowsTest > 
shouldSetWindowRetentionTime STARTED

org.apache.kafka.streams.kstream.SessionWindowsTest > 
shouldSetWindowRetentionTime PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullTopicsWhenAddingSourceWithPattern STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullTopicsWhenAddingSourceWithPattern PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowZeroTopicsWhenAddingSource STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowZeroTopicsWhenAddingSource PASSED

org.apache.kafka.streams.TopologyTest > shouldFailOnUnknownSource STARTED

org.apache.kafka.streams.TopologyTest > shouldFailOnUnknownSource PASSED

org.apache.kafka.streams.TopologyTest > shouldNotAllowNullNameWhenAddingSink 
STARTED

org.apache.kafka.streams.TopologyTest > shouldNotAllowNullNameWhenAddingSink 
PASSED

org.apache.kafka.streams.TopologyTest > 
multipleSourcesShouldHaveDistinctSubtopologies STARTED

org.apache.kafka.streams.TopologyTest > 
multipleSourcesShouldHaveDistinctSubtopologies PASSED

org.apache.kafka.streams.TopologyTest > 
testNamedTopicMatchesAlreadyProvidedPattern STARTED

org.apache.kafka.streams.TopologyTest > 
testNamedTopicMatchesAlreadyProvidedPattern PASSED

org.apache.kafka.streams.TopologyTest > 
processorsWithSharedStateShouldHaveSameSubtopology STARTED

org.apache.kafka.streams.TopologyTest > 
processorsWithSharedStateShouldHaveSameSubtopology PASSED

org.apache.kafka.streams.TopologyTest > shouldNotAllowToAddTopicTwice STARTED

org.apache.kafka.streams.TopologyTest > shouldNotAllowToAddTopicTwice PASSED

org.apache.kafka.streams.TopologyTest > 
processorWithMultipleSourcesShouldHaveSingleSubtopology STARTED

org.apache.kafka.streams.TopologyTest > 
processorWithMultipleSourcesShouldHaveSingleSubtopology PASSED

org.apache.kafka.streams.TopologyTest > shouldNotAllowToAddStateStoreToSink 
STARTED

org.apache.kafka.streams.TopologyTest > shouldNotAllowToAddStateStoreToSink 
PASSED

org.apache.kafka.streams.TopologyTest > shouldNotAddNullStateStoreSupplier 
STARTED

org.apache.kafka.streams.TopologyTest > shouldNotAddNullStateStoreSupplier 
PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowToAddStateStoreToNonExistingProcessor STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowToAddStateStoreToNonExistingProcessor PASSED

org.apache.kafka.streams.TopologyTest > 
sourceAndProcessorShouldHaveSingleSubtopology STARTED

org.apache.kafka.streams.TopologyTest > 
sourceAndProcessorShouldHaveSingleSubtopology PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullStoreNameWhenConnectingProcessorAndStateStores STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullStoreNameWhenConnectingProcessorAndStateStores PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullNameWhenAddingSourceWithTopic STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullNameWhenAddingSourceWithTopic PASSED

org.apache.kafka.streams.TopologyTest > 
sourceAndProcessorWithStateShouldHaveSingleSubtopology STARTED

org.apache.kafka.streams.TopologyTest > 
sourceAndProcessorWithStateShouldHaveSingleSubtopology PASSED

org.apache.kafka.streams.TopologyTest > shouldNotAllowNullTopicWhenAddingSink 
STARTED

org.apache.kafka.streams.TopologyTest > shouldNotAllowNullTopicWhenAddingSink 
PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowToAddGlobalStoreWithSourceNameEqualsProcessorName STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotA

[DISCUSS] KIP-212: Enforce set of legal characters for connector names

2017-10-23 Thread Sönke Liebau
All,

I've created a KIP to discuss enforcing of rules on what characters are
allowed in connector names.

Since this may break api calls that are currently working I figured a KIP
is the better way to go than to just create a jira.

I'd love to hear your input on this!


Build failed in Jenkins: kafka-trunk-jdk9 #149

2017-10-23 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-5140: Fix reset integration test

--
[...truncated 1.81 MB...]
org.apache.kafka.streams.processor.FailOnInvalidTimestampTest > 
extractMetadataTimestamp STARTED

org.apache.kafka.streams.processor.FailOnInvalidTimestampTest > 
extractMetadataTimestamp PASSED

org.apache.kafka.streams.processor.WallclockTimestampExtractorTest > 
extractSystemTimestamp STARTED

org.apache.kafka.streams.processor.WallclockTimestampExtractorTest > 
extractSystemTimestamp PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowOffsetResetSourceWithDuplicateSourceName STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowOffsetResetSourceWithDuplicateSourceName PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullProcessorSupplier STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullProcessorSupplier PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotSetApplicationIdToNull STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotSetApplicationIdToNull PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testSourceTopics 
STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > testSourceTopics PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullNameWhenAddingSink STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullNameWhenAddingSink PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testNamedTopicMatchesAlreadyProvidedPattern STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testNamedTopicMatchesAlreadyProvidedPattern PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddInternalTopicConfigWithCompactAndDeleteSetForWindowStores STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddInternalTopicConfigWithCompactAndDeleteSetForWindowStores PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddInternalTopicConfigWithCompactForNonWindowStores STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddInternalTopicConfigWithCompactForNonWindowStores PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddTimestampExtractorWithOffsetResetAndPatternPerSource STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddTimestampExtractorWithOffsetResetAndPatternPerSource PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithSameName STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithSameName PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithSelfParent STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithSelfParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddProcessorWithSelfParent STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddProcessorWithSelfParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAssociateStateStoreNameWhenStateStoreSupplierIsInternal STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAssociateStateStoreNameWhenStateStoreSupplierIsInternal PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithSink STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithSink PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testTopicGroups STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > testTopicGroups PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testBuild STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > testBuild PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowOffsetResetSourceWithoutTopics STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowOffsetResetSourceWithoutTopics PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAddNullStateStoreSupplier STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAddNullStateStoreSupplier PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullNameWhenAddingSource STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullNameWhenAddingSource PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullTopicWhenAddingSink STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullTopicWhenAddingSink PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowToAddGlobalStoreWithSourceNameEqualsProcessorName STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowTo

Build failed in Jenkins: kafka-trunk-jdk7 #2918

2017-10-23 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-5140: Fix reset integration test

--
[...truncated 1.83 MB...]
org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenDeleted STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenDeleted PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testNoMessagesSentExceptionFromOverlappingPatterns STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testNoMessagesSentExceptionFromOverlappingPatterns PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
shouldAddStateStoreToRegexDefinedSource STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
shouldAddStateStoreToRegexDefinedSource PASSED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperationsWithZeroSizedCache STARTED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperationsWithZeroSizedCache PASSED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperationsWithNonZeroSizedCache STARTED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperationsWithNonZeroSizedCache PASSED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldCompactTopicsForStateChangelogs STARTED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldCompactTopicsForStateChangelogs PASSED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldUseCompactAndDeleteForWindowStoreChangelogs STARTED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldUseCompactAndDeleteForWindowStoreChangelogs PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceSessionWindows STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceSessionWindows PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduce STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduce PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregate STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregate PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCount STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCount PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldGroupByKey STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldGroupByKey PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountWithInternalStore STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountWithInternalStore PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceWindowed STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceWindowed PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountSessionWindows STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountSessionWindows PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregateWindowed STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregateWindowed PASSED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableLeftJoin STARTED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableLeftJoin PASSED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableJoin STARTED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableJoin PASSED

org.apache.kafka.streams.integration.ResetIntegrationTest > 
testReprocessingFromScratchAfterResetWithIntermediateUserTopic STARTED

org.apache.kafka.streams.integration.ResetIntegrationTest > 
testReprocessingFromScratchAfterResetWithIntermediateUserTopic FAILED
java.lang.IllegalArgumentException: Setting the time to 1508791687000 while 
current time 1508791687475 is newer; this is not allowed
at 
org.apache.kafka.common.utils.MockTime.setCurrentTimeMs(MockTime.java:81)
at 
org.apache.kafka.streams.integration.AbstractResetIntegrationTest.beforePrepareTest(AbstractResetIntegrationTest.java:114)

Re: [DISCUSS] KIP-212: Enforce set of legal characters for connector names

2017-10-23 Thread Randall Hauch
Here's the link to KIP-212:
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=74684586

I do think it's worthwhile to define the rules for connector names.
However, I think it would be better to describe the current restrictions
for names outside of them appearing within URLs. For example, if we can
keep connector names relatively free of constraints but instead define how
names should be encoded when used within URLs (e.g., URL encoding), then we
may not have (m)any backward compatibility issues other than fixing some
bugs related to proper encoding/decoding.

Thoughts?


On Mon, Oct 23, 2017 at 3:44 PM, Sönke Liebau <
soenke.lie...@opencore.com.invalid> wrote:

> All,
>
> I've created a KIP to discuss enforcing of rules on what characters are
> allowed in connector names.
>
> Since this may break api calls that are currently working I figured a KIP
> is the better way to go than to just create a jira.
>
> I'd love to hear your input on this!
>


[jira] [Created] (KAFKA-6109) ResetIntegrationTest may fail due to IllegalArgumentException

2017-10-23 Thread Ted Yu (JIRA)
Ted Yu created KAFKA-6109:
-

 Summary: ResetIntegrationTest may fail due to 
IllegalArgumentException
 Key: KAFKA-6109
 URL: https://issues.apache.org/jira/browse/KAFKA-6109
 Project: Kafka
  Issue Type: Test
Reporter: Ted Yu
Priority: Minor


>From https://builds.apache.org/job/kafka-trunk-jdk7/2918 :
{code}
org.apache.kafka.streams.integration.ResetIntegrationTest > 
testReprocessingFromScratchAfterResetWithIntermediateUserTopic FAILED
java.lang.IllegalArgumentException: Setting the time to 1508791687000 while 
current time 1508791687475 is newer; this is not allowed
at 
org.apache.kafka.common.utils.MockTime.setCurrentTimeMs(MockTime.java:81)
at 
org.apache.kafka.streams.integration.AbstractResetIntegrationTest.beforePrepareTest(AbstractResetIntegrationTest.java:114)
at 
org.apache.kafka.streams.integration.ResetIntegrationTest.before(ResetIntegrationTest.java:55)
{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Build failed in Jenkins: kafka-0.11.0-jdk7 #323

2017-10-23 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-6042: Avoid deadlock between two groups with delayed operations

--
[...truncated 2.45 MB...]
org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldNotWriteToChangeLogOnPutIfAbsentWhenValueForKeyExists STARTED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldNotWriteToChangeLogOnPutIfAbsentWhenValueForKeyExists PASSED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldReturnNullOnPutIfAbsentWhenNoPreviousValue STARTED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldReturnNullOnPutIfAbsentWhenNoPreviousValue PASSED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldLogKeyNullOnDelete STARTED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldLogKeyNullOnDelete PASSED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldReturnNullOnGetWhenDoesntExist STARTED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldReturnNullOnGetWhenDoesntExist PASSED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldReturnOldValueOnDelete STARTED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldReturnOldValueOnDelete PASSED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldNotWriteToInnerOnPutIfAbsentWhenValueForKeyExists STARTED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldNotWriteToInnerOnPutIfAbsentWhenValueForKeyExists PASSED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldReturnValueOnGetWhenExists STARTED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldReturnValueOnGetWhenExists PASSED

org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest > 
shouldRemove STARTED

org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest > 
shouldRemove PASSED

org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest > 
shouldPutAndFetch STARTED

org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest > 
shouldPutAndFetch PASSED

org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest > 
shouldRollSegments STARTED

org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest > 
shouldRollSegments PASSED

org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest > 
shouldFindValuesWithinRange STARTED

org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest > 
shouldFindValuesWithinRange PASSED

org.apache.kafka.streams.StreamsConfigTest > shouldUseNewConfigsWhenPresent 
STARTED

org.apache.kafka.streams.StreamsConfigTest > shouldUseNewConfigsWhenPresent 
PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > testGetConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > shouldAcceptAtLeastOnce STARTED

org.apache.kafka.streams.StreamsConfigTest > shouldAcceptAtLeastOnce PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldUseCorrectDefaultsWhenNoneSpecified STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldUseCorrectDefaultsWhenNoneSpecified PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerEnableIdempotenceIfEosDisabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerEnableIdempotenceIfEosDisabled PASSED

org.apache.kafka.streams.StreamsConfigTest > defaultSerdeShouldBeConfigured 
STARTED

org.apache.kafka.streams.StreamsConfigTest > defaultSerdeShouldBeConfigured 
PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSetDifferentDefaultsIfEosEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSetDifferentDefaultsIfEosEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldNotOverrideUserConfigRetriesIfExactlyOnceEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldNotOverrideUserConfigRetriesIfExactlyOnceEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetProducerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > testGetProducerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerMaxInFlightRequestPerConnectionsW

Build failed in Jenkins: kafka-trunk-jdk8 #2164

2017-10-23 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-5140: Fix reset integration test

--
[...truncated 1.78 MB...]
org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryStateWithNonZeroSizedCache PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftOuterJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftOuterJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerLeftJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerLeftJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterOuterJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterOuterJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerOuterJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerOuterJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftInnerJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftInnerJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerInnerJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerInnerJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerOuterJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerOuterJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterLeftJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterLeftJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterInnerJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterInnerJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerInnerJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerInnerJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftOuterJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftOuterJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterLeftJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterLeftJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftLeftJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftLeftJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterInnerJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterInnerJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftInnerJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftInnerJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerLeftJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerLeftJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftLeftJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftLeftJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterOuterJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterOuterJoinQueryable PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testShouldReadFromRegexAndNamedTopics STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testShouldReadFromRegexAndNamedTopics PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenCreated STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenCreated PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenDeleted STARTED

org.apache.kafka.streams.integration.RegexSourceIntegra

[GitHub] kafka pull request #4121: Kafka 6042: Use state write lock for delayed txn o...

2017-10-23 Thread rajinisivaram
GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/4121

Kafka 6042: Use state write lock for delayed txn operations 



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-6042-txn-delayedlock

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4121.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4121


commit 13b99144ffd1dbbe54105ec124f3b3e6d70d49a6
Author: Rajini Sivaram 
Date:   2017-10-23T21:37:22Z

KAFKA-6042: Use state write lock for delayed txn operations




---


[GitHub] kafka pull request #4122: KAFKA-6096: Add multi-threaded tests for group coo...

2017-10-23 Thread rajinisivaram
GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/4122

KAFKA-6096: Add multi-threaded tests for group coordinator, txn manager



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-6096-deadlock-test

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4122.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4122


commit 5e39e73da84c0057ae4815b066cbc6e9113bc608
Author: Rajini Sivaram 
Date:   2017-10-23T20:59:04Z

KAFKA-6096: Add multi-threaded tests for group coordinator, txn manager




---


[jira] [Created] (KAFKA-6110) Warning when running the broker on Windows

2017-10-23 Thread Vahid Hashemian (JIRA)
Vahid Hashemian created KAFKA-6110:
--

 Summary: Warning when running the broker on Windows
 Key: KAFKA-6110
 URL: https://issues.apache.org/jira/browse/KAFKA-6110
 Project: Kafka
  Issue Type: Bug
Reporter: Vahid Hashemian
Priority: Minor


The following warning appears in the broker log at startup:
{code}
[2017-10-23 15:29:49,370] WARN Error processing 
kafka.log:type=LogManager,name=LogDirectoryOffline,logDirectory=C:\tmp\kafka-logs
 (com.yammer.metrics.reporting.JmxReporter)
javax.management.MalformedObjectNameException: Invalid character ':' in value 
part of property
at javax.management.ObjectName.construct(ObjectName.java:618)
at javax.management.ObjectName.(ObjectName.java:1382)
at 
com.yammer.metrics.reporting.JmxReporter.onMetricAdded(JmxReporter.java:395)
at 
com.yammer.metrics.core.MetricsRegistry.notifyMetricAdded(MetricsRegistry.java:516)
at 
com.yammer.metrics.core.MetricsRegistry.getOrAdd(MetricsRegistry.java:491)
at 
com.yammer.metrics.core.MetricsRegistry.newGauge(MetricsRegistry.java:79)
at 
kafka.metrics.KafkaMetricsGroup$class.newGauge(KafkaMetricsGroup.scala:80)
at kafka.log.LogManager.newGauge(LogManager.scala:50)
at kafka.log.LogManager$$anonfun$6.apply(LogManager.scala:117)
at kafka.log.LogManager$$anonfun$6.apply(LogManager.scala:116)
at 
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at kafka.log.LogManager.(LogManager.scala:116)
at kafka.log.LogManager$.apply(LogManager.scala:799)
at kafka.server.KafkaServer.startup(KafkaServer.scala:222)
at 
kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
at kafka.Kafka$.main(Kafka.scala:92)
at kafka.Kafka.main(Kafka.scala)
{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Build failed in Jenkins: kafka-trunk-jdk7 #2919

2017-10-23 Thread Apache Jenkins Server
See 

--
[...truncated 1.83 MB...]
org.apache.kafka.streams.kstream.JoinWindowsTest > 
shouldUseWindowSizeForMaintainDurationWhenSizeLargerThanDefaultMaintainMs PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > 
shouldHaveSaneEqualsAndHashCode STARTED

org.apache.kafka.streams.kstream.JoinWindowsTest > 
shouldHaveSaneEqualsAndHashCode PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > validWindows STARTED

org.apache.kafka.streams.kstream.JoinWindowsTest > validWindows PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > startTimeShouldNotBeAfterEnd 
STARTED

org.apache.kafka.streams.kstream.JoinWindowsTest > startTimeShouldNotBeAfterEnd 
PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > 
untilShouldSetMaintainDuration STARTED

org.apache.kafka.streams.kstream.JoinWindowsTest > 
untilShouldSetMaintainDuration PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > 
retentionTimeMustNoBeSmallerThanWindowSize STARTED

org.apache.kafka.streams.kstream.JoinWindowsTest > 
retentionTimeMustNoBeSmallerThanWindowSize PASSED

org.apache.kafka.streams.kstream.JoinWindowsTest > 
timeDifferenceMustNotBeNegative STARTED

org.apache.kafka.streams.kstream.JoinWindowsTest > 
timeDifferenceMustNotBeNegative PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
shouldComputeWindowsForHoppingWindows STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
shouldComputeWindowsForHoppingWindows PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > shouldSetWindowAdvance 
STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > shouldSetWindowAdvance PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
shouldComputeWindowsForTumblingWindows STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
shouldComputeWindowsForTumblingWindows PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
shouldHaveSaneEqualsAndHashCode STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
shouldHaveSaneEqualsAndHashCode PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
shouldComputeWindowsForBarelyOverlappingHoppingWindows STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
shouldComputeWindowsForBarelyOverlappingHoppingWindows PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > shouldSetWindowSize STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > shouldSetWindowSize PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > windowSizeMustNotBeZero 
STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > windowSizeMustNotBeZero 
PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > advanceIntervalMustNotBeZero 
STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > advanceIntervalMustNotBeZero 
PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > windowSizeMustNotBeNegative 
STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > windowSizeMustNotBeNegative 
PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
advanceIntervalMustNotBeNegative STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
advanceIntervalMustNotBeNegative PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > shouldSetWindowRetentionTime 
STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > shouldSetWindowRetentionTime 
PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
advanceIntervalMustNotBeLargerThanWindowSize STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
advanceIntervalMustNotBeLargerThanWindowSize PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
retentionTimeMustNoBeSmallerThanWindowSize STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
retentionTimeMustNoBeSmallerThanWindowSize PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
shouldUseWindowSizeAsRentitionTimeIfWindowSizeIsLargerThanDefaultRetentionTime 
STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
shouldUseWindowSizeAsRentitionTimeIfWindowSizeIsLargerThanDefaultRetentionTime 
PASSED

org.apache.kafka.streams.kstream.SessionWindowsTest > 
retentionTimeShouldBeGapIfGapIsLargerThanDefaultRetentionTime STARTED

org.apache.kafka.streams.kstream.SessionWindowsTest > 
retentionTimeShouldBeGapIfGapIsLargerThanDefaultRetentionTime PASSED

org.apache.kafka.streams.kstream.SessionWindowsTest > shouldSetWindowGap STARTED

org.apache.kafka.streams.kstream.SessionWindowsTest > shouldSetWindowGap PASSED

org.apache.kafka.streams.kstream.SessionWindowsTest > 
shouldBeEqualWhenGapAndMaintainMsAreTheSame STARTED

org.apache.kafka.streams.kstream.SessionWindowsTest > 
shouldBeEqualWhenGapAndMaintainMsAreTheSame PASSED

org.apache.kafka.streams.kstream.SessionWindowsTest > 
retentionTimeMustNotBeNegative STARTED

org.apache.kafka.streams.kstream.SessionWindowsTest > 
retentionTimeMustNotBeNegative PASSED

org.apache.kafka.streams.kstream.SessionWindowsTest > 
shouldNotBeEqualWhenMaintainMsDiffer

Build failed in Jenkins: kafka-0.11.0-jdk7 #324

2017-10-23 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-5140: Fix reset integration test

--
[...truncated 2.09 MB...]

org.apache.kafka.common.config.ConfigDefTest > testBaseConfigDefDependents 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringClass 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringClass 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringShort 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringShort 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testInvalidDefault STARTED

org.apache.kafka.common.config.ConfigDefTest > testInvalidDefault PASSED

org.apache.kafka.common.config.ConfigDefTest > toRst STARTED

org.apache.kafka.common.config.ConfigDefTest > toRst PASSED

org.apache.kafka.common.config.ConfigDefTest > testCanAddInternalConfig STARTED

org.apache.kafka.common.config.ConfigDefTest > testCanAddInternalConfig PASSED

org.apache.kafka.common.config.ConfigDefTest > testMissingRequired STARTED

org.apache.kafka.common.config.ConfigDefTest > testMissingRequired PASSED

org.apache.kafka.common.config.ConfigDefTest > 
testParsingEmptyDefaultValueForStringFieldShouldSucceed STARTED

org.apache.kafka.common.config.ConfigDefTest > 
testParsingEmptyDefaultValueForStringFieldShouldSucceed PASSED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringPassword 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringPassword 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringList 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringList 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringLong 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringLong 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringBoolean 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringBoolean 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testNullDefaultWithValidator 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testNullDefaultWithValidator 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testMissingDependentConfigs 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testMissingDependentConfigs 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringDouble 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringDouble 
PASSED

org.apache.kafka.common.config.ConfigDefTest > toEnrichedRst STARTED

org.apache.kafka.common.config.ConfigDefTest > toEnrichedRst PASSED

org.apache.kafka.common.config.ConfigDefTest > testDefinedTwice STARTED

org.apache.kafka.common.config.ConfigDefTest > testDefinedTwice PASSED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringString 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringString 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testNestedClass STARTED

org.apache.kafka.common.config.ConfigDefTest > testNestedClass PASSED

org.apache.kafka.common.config.ConfigDefTest > testBadInputs STARTED

org.apache.kafka.common.config.ConfigDefTest > testBadInputs PASSED

org.apache.kafka.common.config.ConfigDefTest > testValidateMissingConfigKey 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testValidateMissingConfigKey 
PASSED
:clients:determineCommitId UP-TO-DATE
:clients:createVersionFile
:clients:jar UP-TO-DATE
:core:compileJava NO-SOURCE
:core:compileScala UP-TO-DATE
:core:processResources NO-SOURCE
:core:classes UP-TO-DATE
:core:copyDependantLibs
:core:jar
:examples:compileJava:19:
 warning: [deprecation] FetchRequest in kafka.api has been deprecated
import kafka.api.FetchRequest;
^
:20:
 warning: [deprecation] FetchRequestBuilder in kafka.api has been deprecated
import kafka.api.FetchRequestBuilder;
^
:22:
 warning: [deprecation] SimpleConsumer in kafka.javaapi.consumer has been 
deprecated
import kafka.javaapi.consumer.SimpleConsumer;
 ^
3 warnings

:examples:processResources NO-SOURCE
:examples:classes
:examples:checkstyleMain
:examples:compileTestJava NO-SOURCE
:examples:processTestResources NO-SOURCE
:examples:testClasses UP-TO-DATE
:examples:checkstyleTest NO-SOURCE
:examples:findbugsMain
Scanning archives (0 / 19)Scanning archives (1 / 19)Scanning archives (2 / 
19)Scanning archives (3 / 19)Scanning archives 

[GitHub] kafka pull request #4121: Kafka 6042: Use state write lock for delayed txn o...

2017-10-23 Thread rajinisivaram
Github user rajinisivaram closed the pull request at:

https://github.com/apache/kafka/pull/4121


---


[GitHub] kafka pull request #4096: HOTFIX: Poll with zero milliseconds during restora...

2017-10-23 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4096


---


Re: [DISCUSS]: KIP-159: Introducing Rich functions to Streams

2017-10-23 Thread Guozhang Wang
Thanks for the information Jeyhun. I had also forgot about KAFKA-3907 with
this KIP..

Thinking a bit more, I'm now inclined to go with what we agreed before, to
add the commit() call to `RecordContext`. A few minor tweaks on its
implementation:

1. Maybe we can deprecate the `commit()` in ProcessorContext, to enforce
user to consolidate this call as
"processorContext.recordContext().commit()". And internal implementation of
`ProcessorContext.commit()` in `ProcessorContextImpl` is also changed to
this call.

2. Add the `task` reference to the impl class, `ProcessorRecordContext`, so
that it can implement the commit call itself.

3. In the wiki page, the statement that "However, call to a commit() method,
is valid only within RecordContext interface (at least for now), we throw
an exception in ProcessorRecordContext.commit()." and the code snippet
below would need to be updated as well.


Guozhang



On Mon, Oct 23, 2017 at 1:40 PM, Matthias J. Sax 
wrote:

> Fair point. This is a long discussion and I totally forgot that we
> discussed this.
>
> Seems I changed my opinion about including KAFKA-3907...
>
> Happy to hear what others think.
>
>
> -Matthias
>
> On 10/23/17 1:20 PM, Jeyhun Karimov wrote:
> > Hi Matthias,
> >
> > It is probably my bad, the discussion was a bit long in this thread. I
> > proposed the related issue in the related KIP discuss thread [1] and got
> an
> > approval [2,3].
> > Maybe I misunderstood.
> >
> > [1]
> > http://search-hadoop.com/m/Kafka/uyzND19Asmg1GKKXT1?subj=
> Re+DISCUSS+KIP+159+Introducing+Rich+functions+to+Streams
> > [2]
> > http://search-hadoop.com/m/Kafka/uyzND1kpct22GKKXT1?subj=
> Re+DISCUSS+KIP+159+Introducing+Rich+functions+to+Streams
> > [3]
> > http://search-hadoop.com/m/Kafka/uyzND1G6TGIGKKXT1?subj=
> Re+DISCUSS+KIP+159+Introducing+Rich+functions+to+Streams
> >
> >
> > On Mon, Oct 23, 2017 at 8:44 PM Matthias J. Sax 
> > wrote:
> >
> >> Interesting.
> >>
> >> I thought that https://issues.apache.org/jira/browse/KAFKA-4125 is the
> >> main motivation for this KIP :)
> >>
> >> I also think, that we should not expose the full ProcessorContext at DSL
> >> level.
> >>
> >> Thus, overall I am not even sure if we should fix KAFKA-3907 at all.
> >> Manual commits are something DSL users should not worry about -- and if
> >> one really needs this, an advanced user can still insert a dummy
> >> `transform` to request a commit from there.
> >>
> >> -Matthias
> >>
> >>
> >> On 10/18/17 5:39 AM, Jeyhun Karimov wrote:
> >>> Hi,
> >>>
> >>> The main intuition is to solve [1], which is part of this KIP.
> >>> I agree with you that this might not seem semantically correct as we
> are
> >>> not committing record state.
> >>> Alternatively, we can remove commit() from RecordContext and add
> >>> ProcessorContext (which has commit() method) as an extra argument to
> Rich
> >>> methods:
> >>>
> >>> instead of
> >>> public interface RichValueMapper {
> >>> VR apply(final V value,
> >>>  final K key,
> >>>  final RecordContext recordContext);
> >>> }
> >>>
> >>> we can adopt
> >>>
> >>> public interface RichValueMapper {
> >>> VR apply(final V value,
> >>>  final K key,
> >>>  final RecordContext recordContext,
> >>>  final ProcessorContext processorContext);
> >>> }
> >>>
> >>>
> >>> However, in this case, a user can get confused as ProcessorContext and
> >>> RecordContext share some methods with the same name.
> >>>
> >>>
> >>> Cheers,
> >>> Jeyhun
> >>>
> >>>
> >>> [1] https://issues.apache.org/jira/browse/KAFKA-3907
> >>>
> >>>
> >>> On Tue, Oct 17, 2017 at 3:19 AM Guozhang Wang 
> >> wrote:
> >>>
>  Regarding #6 above, I'm still not clear why we would need `commit()`
> in
>  both ProcessorContext and RecordContext, could you elaborate a bit
> more?
> 
>  To me `commit()` is really a processor context not a record context
>  logically: when you call that function, it means we would commit the
> >> state
>  of the whole task up to this processed record, not only that single
> >> record
>  itself.
> 
> 
>  Guozhang
> 
>  On Mon, Oct 16, 2017 at 9:19 AM, Jeyhun Karimov  >
>  wrote:
> 
> > Hi,
> >
> > Thanks for the feedback.
> >
> >
> > 0. RichInitializer definition seems missing.
> >
> >
> >
> > - Fixed.
> >
> >
> >  I'd suggest moving the key parameter in the RichValueXX and
> >> RichReducer
> >> after the value parameters, as well as in the templates; e.g.
> >> public interface RichValueJoiner {
> >> VR apply(final V1 value1, final V2 value2, final K key, final
> >> RecordContext
> >> recordContext);
> >> }
> >
> >
> >
> > - Fixed.
> >
> >
> > 2. Some of the listed functions are not necessary since their pairing
>  APIs
> >> are being deprecated in 1.0 already:
> >>  KGroupedStream groupBy(final RichKeyValueMapper K,
>  ?
> >> super

Re: [DISCUSS] KIP-205: Add getAllKeys() API to ReadOnlyWindowStore

2017-10-23 Thread Guozhang Wang
Thanks Richard for updating the wiki.

Made another pass over it, overall it LGTM. Just a few minor comments:

1) Could you update the title to reflect the function names accordingly?
Generally I think having `all / range` is better in terms of consistency
with key-value windows. I.e. queries with key are named as `get / fetch`
for kv / window stores, and queries without key are named as `range / all`.

2) In the Javadoc, "@throws NullPointerException if null is used for any key"
is not needed as the API does not have any key parameters.


Please feel free to start the voting thread, also at the same time start
implementing the PR in case you may find some impl corner cases that have
not been thought about in the wiki. That would help us discover any
potential blockers earlier.


Guozhang


On Wed, Oct 18, 2017 at 8:42 PM, Richard Yu 
wrote:

> Soliciting more feedback before vote.
>
> On Wed, Oct 18, 2017 at 8:26 PM, Richard Yu 
> wrote:
>
> > Is this KIP close to completion? Because we could start working on the
> > code itself now. (Its at about this stage).
> >
> > On Mon, Oct 16, 2017 at 7:37 PM, Richard Yu 
> > wrote:
> >
> >> As Guozhang Wang mentioned earlier, we want to mirror the structure of
> >> similar Store class (namely KTable). The WindowedStore class might be
> >> unique in itself as it uses fetch() methods, but in my opinion,
> uniformity
> >> should be better suited for simplicity.
> >>
> >> On Mon, Oct 16, 2017 at 11:54 AM, Xavier Léauté 
> >> wrote:
> >>
> >>> Thank you Richard! Do you or Guozhang have any thoughts on my
> suggestions
> >>> to use fetchAll() and fetchAll(timeFrom, timeTo) and reserve the
> "range"
> >>> keyword for when we query a specific range of keys?
> >>>
> >>> Xavier
> >>>
> >>> On Sat, Oct 14, 2017 at 2:32 PM Richard Yu  >
> >>> wrote:
> >>>
> >>> > Thanks for the clarifications, Xavier.
> >>> > I have removed most of the methods except for keys() and all() which
> >>> has
> >>> > been renamed to Guozhang Wang's suggestions.
> >>> >
> >>> > Hope this helps.
> >>> >
> >>> > On Fri, Oct 13, 2017 at 3:28 PM, Xavier Léauté 
> >>> > wrote:
> >>> >
> >>> > > Thanks for the KIP Richard, this is a very useful addition!
> >>> > >
> >>> > > As far as the API changes, I just have a few comments on the
> methods
> >>> that
> >>> > > don't seem directly related to the KIP title, and naming of course
> >>> :).
> >>> > > On the implementation, see my notes further down that will
> hopefully
> >>> > > clarify a few things.
> >>> > >
> >>> > > Regarding the "bonus" methods:
> >>> > > I agree with Guozhang that the KIP lacks proper motivation for
> >>> adding the
> >>> > > min, max, and allLatest methods.
> >>> > > It is also not clear to me what min and max would really mean, what
> >>> > > ordering do we refer to here? Are we first ordering by time, then
> >>> key, or
> >>> > > first by key, then time?
> >>> > > The allLatest method might be useful, but I don't really see how it
> >>> would
> >>> > > be used in practice if we have to scan the entire range of keys for
> >>> all
> >>> > the
> >>> > > state stores, every single time.
> >>> > >
> >>> > > Maybe we could flesh the motivation behind those extra methods, but
> >>> in
> >>> > the
> >>> > > interest of time, and moving the KIP forward it might make sense to
> >>> file
> >>> > a
> >>> > > follow-up once we have more concrete use-cases.
> >>> > >
> >>> > > On naming:
> >>> > > I also agree with Guozhang that "keys()" should be renamed. It
> feels
> >>> a
> >>> > bit
> >>> > > of a misnomer, since it not only returns keys, but also the values.
> >>> > >
> >>> > > As far as what to rename it to, I would argue we already have some
> >>> > > discrepancy between key-value stores using range() vs. window
> stores
> >>> > using
> >>> > > fetch().
> >>> > > I assume we called the window method "fetch" instead of "get"
> >>> because you
> >>> > > might get back more than one window for the requested key.
> >>> > >
> >>> > > If we wanted to make things consistent with both existing key-value
> >>> store
> >>> > > naming and window store naming, we could do the following:
> >>> > > Decide that "all" always refers to the entire range of keys,
> >>> independent
> >>> > of
> >>> > > the window and similarly "range" always refers to a particular
> range
> >>> of
> >>> > > keys, irrespective of the window.
> >>> > > We can then prefix methods with "fetch" to indicate that more than
> >>> one
> >>> > > window may be returned for each key in the range.
> >>> > >
> >>> > > This would give us:
> >>> > > - a new fetchAll() method for all the keys, which makes it clear
> >>> that you
> >>> > > might get back the same key in different windows
> >>> > > - a new fetchAll(timeFrom, timeTo) method to get all the keys in a
> >>> given
> >>> > > time range, again with possibly more than one window per key
> >>> > > - and we'd have to rename fetch(K,K,long, long) to fetchRange(K, K,
> >>> long,
> >>> > > long)  and deprecate the old one to indicate a r

[GitHub] kafka pull request #4032: MINOR: Fix typo

2017-10-23 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4032


---


Re: [DISCUSS] KIP-210: Provide for custom error handling when Kafka Streams fails to produce

2017-10-23 Thread Guozhang Wang
Thanks for the KIP Matt.

Regarding the handle interface of ProductionExceptionHandlerResponse, could
you write it on the wiki also, along with the actual added config names
(e.g. what
https://cwiki.apache.org/confluence/display/KAFKA/KIP-161%3A+streams+deserialization+exception+handlers
described).

The question I had about then handle parameters are around the record,
should it be `ProducerRecord`, or be generics of
`ProducerRecord` or `ProducerRecord`?

Also, should the handle function include the `RecordMetadata` as well in
case it is not null?

We may probably try to write down at least the following handling logic and
see if the given API is sufficient for it: 1) throw exception immediately
to fail fast and stop the world, 2) log the error and drop record and
proceed silently, 3) send such errors to a specific "error" Kafka topic, or
record it as an app-level metrics (
https://kafka.apache.org/documentation/#kafka_streams_monitoring) for
monitoring.

Guozhang



On Fri, Oct 20, 2017 at 5:47 PM, Matt Farmer  wrote:

> I did some more digging tonight.
>
> @Ted: It looks like the deserialization handler uses
> "default.deserialization.exception.handler" for the config name. No
> ".class" on the end. I'm inclined to think this should use
> "default.production.exception.handler".
>
> On Fri, Oct 20, 2017 at 8:22 PM Matt Farmer  wrote:
>
> > Okay, I've dug into this a little bit.
> >
> > I think getting access to the serialized record is possible, and changing
> > the naming and return type is certainly doable. However, because we're
> > hooking into the onCompletion callback we have no guarantee that the
> > ProcessorContext state hasn't changed by the time this particular handler
> > runs. So I think the signature would change to something like:
> >
> > ProductionExceptionHandlerResponse handle(final ProducerRecord<..>
> record,
> > final Exception exception)
> >
> > Would this be acceptable?
> >
> > On Thu, Oct 19, 2017 at 7:33 PM Matt Farmer  wrote:
> >
> >> Ah good idea. Hmmm. I can line up the naming and return type but I’m not
> >> sure if I can get my hands on the context and the record itself without
> >> other changes.
> >>
> >> Let me dig in and follow up here tomorrow.
> >> On Thu, Oct 19, 2017 at 7:14 PM Matthias J. Sax 
> >> wrote:
> >>
> >>> Thanks for the KIP.
> >>>
> >>> Are you familiar with KIP-161?
> >>>
> >>>
> >>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-161%3A+streams+
> deserialization+exception+handlers
> >>>
> >>> I thinks, we should align the design (parameter naming, return types,
> >>> class names etc) of KIP-210 to KIP-161 to get a unified user
> experience.
> >>>
> >>>
> >>>
> >>> -Matthias
> >>>
> >>>
> >>> On 10/18/17 4:20 PM, Matt Farmer wrote:
> >>> > I’ll create the JIRA ticket.
> >>> >
> >>> > I think that config name will work. I’ll update the KIP accordingly.
> >>> > On Wed, Oct 18, 2017 at 6:09 PM Ted Yu  wrote:
> >>> >
> >>> >> Can you create JIRA that corresponds to the KIP ?
> >>> >>
> >>> >> For the new config, how about naming it
> >>> >> production.exception.processor.class
> >>> >> ? This way it is clear that class name should be specified.
> >>> >>
> >>> >> Cheers
> >>> >>
> >>> >> On Wed, Oct 18, 2017 at 2:40 PM, Matt Farmer  wrote:
> >>> >>
> >>> >>> Hello everyone,
> >>> >>>
> >>> >>> This is the discussion thread for the KIP that I just filed here:
> >>> >>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> >>> >>> 210+-+Provide+for+custom+error+handling++when+Kafka+
> >>> >>> Streams+fails+to+produce
> >>> >>>
> >>> >>> Looking forward to getting some feedback from folks about this idea
> >>> and
> >>> >>> working toward a solution we can contribute back. :)
> >>> >>>
> >>> >>> Cheers,
> >>> >>> Matt Farmer
> >>> >>>
> >>> >>
> >>> >
> >>>
> >>>
>



-- 
-- Guozhang


Re: Requesting Wiki Comment Permissions

2017-10-23 Thread Guozhang Wang
Hi Jordan,

I'd recommend leaving comments or asking questions about a wiki page
directly on the mailing list here, as it gets much faster.


Guozhang


On Mon, Oct 23, 2017 at 1:31 PM, Jordan Pilat  wrote:

> Hello,
>
> Would somebody please be so kind as to grant me "comment" permissions on
> the Wiki?
>
> The page in question to which I'd like to add a comment is:
> https://cwiki.apache.org/confluence/display/KAFKA/
> Kafka+data+structures+in+Zookeeper
>
> My confluence username is jrpilat
>
> Thanks!
> - Jordan
>



-- 
-- Guozhang


[jira] [Reopened] (KAFKA-5842) QueryableStateIntegrationTest may fail with JDK 7

2017-10-23 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reopened KAFKA-5842:


Happened again: 
https://builds.apache.org/blue/organizations/jenkins/kafka-0.11.0-jdk7/detail/kafka-0.11.0-jdk7/323/tests

> QueryableStateIntegrationTest may fail with JDK 7
> -
>
> Key: KAFKA-5842
> URL: https://issues.apache.org/jira/browse/KAFKA-5842
> Project: Kafka
>  Issue Type: Test
>Reporter: Ted Yu
>Priority: Minor
>
> Found the following when running test suite for 0.11.0.1 RC0 :
> {code}
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
> concurrentAccesses FAILED
> java.lang.AssertionError: Key not found one
> at org.junit.Assert.fail(Assert.java:88)
> at 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest.verifyGreaterOrEqual(QueryableStateIntegrationTest.java:893)
> at 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest.concurrentAccesses(QueryableStateIntegrationTest.java:399)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #4123: MINOR: reset state in cleanup, fixes jmx mixin fla...

2017-10-23 Thread xvrl
GitHub user xvrl opened a pull request:

https://github.com/apache/kafka/pull/4123

MINOR: reset state in cleanup, fixes jmx mixin flakiness

@ewencp @ijuma 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/xvrl/kafka fix-jmx-flakiness

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4123.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4123


commit 9efd07d362f68b01722465bbbc640f146e90a4b3
Author: Xavier Léauté 
Date:   2017-10-24T00:36:04Z

reset state in cleanup, fixes jmx mixin flakiness




---


Re: Metadata class doesn't "expose" topics with errors

2017-10-23 Thread Guozhang Wang
Hello Paolo,

The reason we filtered the errors in the topics in the generated Cluster is
that Metadata and its "fetch()" returned Cluster is a common class that is
used among all clients (producer, consumer, connect, streams, admin), and
is treated as a high-level representation of the current snapshot of the
hosted topic information of the cluster, and hence we intentionally exclude
any transient errors in the representation to abstract such issues away
from its users.

As for your implementation on KIP-204, I think just wait-and-retry for the
updated metadata.fetch() Cluster contain the leader information for the
topic is fine: since if a LEADER_NOT_AVAILABLE is returned you'll need to
backoff and retry anyways, right?


Guozhang



On Mon, Oct 23, 2017 at 2:36 AM, Paolo Patierno  wrote:

> Finally another plan could be to use nesting of runnable calls.
>
> The first one for asking metadata (using the MetadataRequest which
> provides us all the errors) and then sending the delete records requests in
> the handleResponse() of such metadata request.
>
>
> Paolo Patierno
> Senior Software Engineer (IoT) @ Red Hat
> Microsoft MVP on Azure & IoT
> Microsoft Azure Advisor
>
> Twitter : @ppatierno
> Linkedin : paolopatierno
> Blog : DevExperience
>
>
> 
> From: Paolo Patierno 
> Sent: Monday, October 23, 2017 9:06 AM
> To: dev@kafka.apache.org
> Subject: Metadata class doesn't "expose" topics with errors
>
> Hi devs,
>
> while developing the KIP-204 (having delete records operation in the "new"
> Admin Client) I'm facing with the following doubt (or maybe a lack of info)
> ...
>
>
> As described by KIP-107 (which implements this feature at protocol level
> and in the "legacy" Admin Client), the request needs to be sent to the
> leader.
>
>
> For both KIPs, the operation has a Map (offset is
> a long in the "legacy" API but it's becoming to be a class in the "new"
> API) and in order to reduce the number of requests to different leaders, my
> code groups partitions having same leader so having a Map Map>.
>
>
> In order to know the leaders I need to request metadata and there are two
> ways for doing that :
>
>
>   *   using something like the producer does with Metadata class, putting
> the topics, request update and waiting for it
>   *   using the low level MetadataRequest and handling the related
> response (which is what the "legacy" API does today)
>
> I noticed that building the Cluster object from the MetadataResponse, the
> topics with errors are skipped and it means that in the final "high level"
> Metadata class (fetching the Cluster object) there is no information about
> them. So with first solution we have no info about topics with errors
> (maybe the only errors I'm able to handle is the "LEADER_NOT_AVAILABLE", if
> leaderFor() on the Cluster returns a null Node).
>
> Is there any specific reason why "topics with errors" are not exposed in
> the Metadata instance ?
> Is the preferred pattern using the low level protocol stuff in such case ?
>
> Thanks
>
>
> Paolo Patierno
> Senior Software Engineer (IoT) @ Red Hat
> Microsoft MVP on Azure & IoT
> Microsoft Azure Advisor
>
> Twitter : @ppatierno
> Linkedin : paolopatierno
> Blog : DevExperience
>



-- 
-- Guozhang


Re: [DISCUSS] KIP-210: Provide for custom error handling when Kafka Streams fails to produce

2017-10-23 Thread Matt Farmer
Thanks for this feedback. I’m at a conference right now and am planning on
updating the KIP again with details from this conversation later this week.

I’ll shoot you a more detailed response then! :)
On Mon, Oct 23, 2017 at 8:16 PM Guozhang Wang  wrote:

> Thanks for the KIP Matt.
>
> Regarding the handle interface of ProductionExceptionHandlerResponse, could
> you write it on the wiki also, along with the actual added config names
> (e.g. what
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-161%3A+streams+deserialization+exception+handlers
> described).
>
> The question I had about then handle parameters are around the record,
> should it be `ProducerRecord`, or be generics of
> `ProducerRecord` or `ProducerRecord Object, ? extends Object>`?
>
> Also, should the handle function include the `RecordMetadata` as well in
> case it is not null?
>
> We may probably try to write down at least the following handling logic and
> see if the given API is sufficient for it: 1) throw exception immediately
> to fail fast and stop the world, 2) log the error and drop record and
> proceed silently, 3) send such errors to a specific "error" Kafka topic, or
> record it as an app-level metrics (
> https://kafka.apache.org/documentation/#kafka_streams_monitoring) for
> monitoring.
>
> Guozhang
>
>
>
> On Fri, Oct 20, 2017 at 5:47 PM, Matt Farmer  wrote:
>
> > I did some more digging tonight.
> >
> > @Ted: It looks like the deserialization handler uses
> > "default.deserialization.exception.handler" for the config name. No
> > ".class" on the end. I'm inclined to think this should use
> > "default.production.exception.handler".
> >
> > On Fri, Oct 20, 2017 at 8:22 PM Matt Farmer  wrote:
> >
> > > Okay, I've dug into this a little bit.
> > >
> > > I think getting access to the serialized record is possible, and
> changing
> > > the naming and return type is certainly doable. However, because we're
> > > hooking into the onCompletion callback we have no guarantee that the
> > > ProcessorContext state hasn't changed by the time this particular
> handler
> > > runs. So I think the signature would change to something like:
> > >
> > > ProductionExceptionHandlerResponse handle(final ProducerRecord<..>
> > record,
> > > final Exception exception)
> > >
> > > Would this be acceptable?
> > >
> > > On Thu, Oct 19, 2017 at 7:33 PM Matt Farmer  wrote:
> > >
> > >> Ah good idea. Hmmm. I can line up the naming and return type but I’m
> not
> > >> sure if I can get my hands on the context and the record itself
> without
> > >> other changes.
> > >>
> > >> Let me dig in and follow up here tomorrow.
> > >> On Thu, Oct 19, 2017 at 7:14 PM Matthias J. Sax <
> matth...@confluent.io>
> > >> wrote:
> > >>
> > >>> Thanks for the KIP.
> > >>>
> > >>> Are you familiar with KIP-161?
> > >>>
> > >>>
> > >>>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-161%3A+streams+
> > deserialization+exception+handlers
> > >>>
> > >>> I thinks, we should align the design (parameter naming, return types,
> > >>> class names etc) of KIP-210 to KIP-161 to get a unified user
> > experience.
> > >>>
> > >>>
> > >>>
> > >>> -Matthias
> > >>>
> > >>>
> > >>> On 10/18/17 4:20 PM, Matt Farmer wrote:
> > >>> > I’ll create the JIRA ticket.
> > >>> >
> > >>> > I think that config name will work. I’ll update the KIP
> accordingly.
> > >>> > On Wed, Oct 18, 2017 at 6:09 PM Ted Yu 
> wrote:
> > >>> >
> > >>> >> Can you create JIRA that corresponds to the KIP ?
> > >>> >>
> > >>> >> For the new config, how about naming it
> > >>> >> production.exception.processor.class
> > >>> >> ? This way it is clear that class name should be specified.
> > >>> >>
> > >>> >> Cheers
> > >>> >>
> > >>> >> On Wed, Oct 18, 2017 at 2:40 PM, Matt Farmer 
> wrote:
> > >>> >>
> > >>> >>> Hello everyone,
> > >>> >>>
> > >>> >>> This is the discussion thread for the KIP that I just filed here:
> > >>> >>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > >>> >>> 210+-+Provide+for+custom+error+handling++when+Kafka+
> > >>> >>> Streams+fails+to+produce
> > >>> >>>
> > >>> >>> Looking forward to getting some feedback from folks about this
> idea
> > >>> and
> > >>> >>> working toward a solution we can contribute back. :)
> > >>> >>>
> > >>> >>> Cheers,
> > >>> >>> Matt Farmer
> > >>> >>>
> > >>> >>
> > >>> >
> > >>>
> > >>>
> >
>
>
>
> --
> -- Guozhang
>


[VOTE] 1.0.0 RC3

2017-10-23 Thread Guozhang Wang
Hello Kafka users, developers and client-developers,

This is the third candidate for release of Apache Kafka 1.0.0. The main PRs
that gets merged in after RC1 are the following:

https://github.com/apache/kafka/commit/dc6bfa553e73ffccd1e604963e076c
78d8ddcd69

It's worth noting that starting in this version we are using a different
version protocol with three digits: *major.minor.bug-fix*

Any and all testing is welcome, but the following areas are worth
highlighting:

1. Client developers should verify that their clients can produce/consume
to/from 1.0.0 brokers (ideally with compressed and uncompressed data).
2. Performance and stress testing. Heroku and LinkedIn have helped with
this in the past (and issues have been found and fixed).
3. End users can verify that their apps work correctly with the new release.

This is a major version release of Apache Kafka. It includes 29 new KIPs.
See the release notes and release plan
(*https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=71764913
*)
for more details. A few feature highlights:

* Java 9 support with significantly faster TLS and CRC32C implementations
* JBOD improvements: disk failure only disables failed disk but not the
broker (KIP-112/KIP-113 part I)
* Controller improvements: reduced logging change to greatly accelerate
admin request handling.
* Newly added metrics across all the modules (KIP-164, KIP-168, KIP-187,
KIP-188, KIP-196)
* Kafka Streams API improvements (KIP-120 / 130 / 138 / 150 / 160 / 161),
and drop compatibility "Evolving" annotations

Release notes for the 1.0.0 release:
*http://home.apache.org/~guozhang/kafka-1.0.0-rc3/RELEASE_NOTES.html
*



*** Please download, test and vote by Friday, October 20, 8pm PT

Kafka's KEYS file containing PGP keys we use to sign the release:
http://kafka.apache.org/KEYS

* Release artifacts to be voted upon (source and binary):
*http://home.apache.org/~guozhang/kafka-1.0.0-rc3/
*

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/org/apache/kafka/

* Javadoc:
*http://home.apache.org/~guozhang/kafka-1.0.0-rc3/javadoc/
*

* Tag to be voted upon (off 1.0 branch) is the 1.0.0-rc2 tag:

https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=7774b0da8ead0d9edd1d4b2f7e1cd743af694112


* Documentation:
Note the documentation can't be pushed live due to changes that will not go
live until the release. You can manually verify by downloading
http://home.apache.org/~guozhang/kafka-1.0.0-rc3/kafka_2.11-1.0.0-site-docs.tgz

I will update this thread with up coming Jenkins builds for this RC later,
they are currently being executed and will be done tomorrow.


/**


Thanks,
-- Guozhang


Re: [VOTE] 1.0.0 RC3

2017-10-23 Thread Ted Yu
bq. Tag to be voted upon (off 1.0 branch) is the 1.0.0-rc2 tag:

There seems to be a typo above: 1.0.0-rc3 tag

FYI

On Mon, Oct 23, 2017 at 6:00 PM, Guozhang Wang  wrote:

> Hello Kafka users, developers and client-developers,
>
> This is the third candidate for release of Apache Kafka 1.0.0. The main PRs
> that gets merged in after RC1 are the following:
>
> https://github.com/apache/kafka/commit/dc6bfa553e73ffccd1e604963e076c
> 78d8ddcd69
>
> It's worth noting that starting in this version we are using a different
> version protocol with three digits: *major.minor.bug-fix*
>
> Any and all testing is welcome, but the following areas are worth
> highlighting:
>
> 1. Client developers should verify that their clients can produce/consume
> to/from 1.0.0 brokers (ideally with compressed and uncompressed data).
> 2. Performance and stress testing. Heroku and LinkedIn have helped with
> this in the past (and issues have been found and fixed).
> 3. End users can verify that their apps work correctly with the new
> release.
>
> This is a major version release of Apache Kafka. It includes 29 new KIPs.
> See the release notes and release plan
> (*https://cwiki.apache.org/confluence/pages/viewpage.
> action?pageId=71764913
>  >*)
> for more details. A few feature highlights:
>
> * Java 9 support with significantly faster TLS and CRC32C implementations
> * JBOD improvements: disk failure only disables failed disk but not the
> broker (KIP-112/KIP-113 part I)
> * Controller improvements: reduced logging change to greatly accelerate
> admin request handling.
> * Newly added metrics across all the modules (KIP-164, KIP-168, KIP-187,
> KIP-188, KIP-196)
> * Kafka Streams API improvements (KIP-120 / 130 / 138 / 150 / 160 / 161),
> and drop compatibility "Evolving" annotations
>
> Release notes for the 1.0.0 release:
> *http://home.apache.org/~guozhang/kafka-1.0.0-rc3/RELEASE_NOTES.html
> *
>
>
>
> *** Please download, test and vote by Friday, October 20, 8pm PT
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> *http://home.apache.org/~guozhang/kafka-1.0.0-rc3/
> *
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
>
> * Javadoc:
> *http://home.apache.org/~guozhang/kafka-1.0.0-rc3/javadoc/
> *
>
> * Tag to be voted upon (off 1.0 branch) is the 1.0.0-rc2 tag:
>
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=
> 7774b0da8ead0d9edd1d4b2f7e1cd743af694112
>
>
> * Documentation:
> Note the documentation can't be pushed live due to changes that will not go
> live until the release. You can manually verify by downloading
> http://home.apache.org/~guozhang/kafka-1.0.0-rc3/
> kafka_2.11-1.0.0-site-docs.tgz
>
> I will update this thread with up coming Jenkins builds for this RC later,
> they are currently being executed and will be done tomorrow.
>
>
> /**
>
>
> Thanks,
> -- Guozhang
>


[jira] [Created] (KAFKA-6111) Tests for KafkaControllerZkUtils

2017-10-23 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-6111:
--

 Summary: Tests for KafkaControllerZkUtils
 Key: KAFKA-6111
 URL: https://issues.apache.org/jira/browse/KAFKA-6111
 Project: Kafka
  Issue Type: Sub-task
Reporter: Ismael Juma
 Fix For: 1.1.0


It has no tests at the moment and we need to fix that.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[VOTE] KIP-205: Add all() and range() API to ReadOnlyWindowStore

2017-10-23 Thread Richard Yu
Hi all,

I want to propose KIP-205 for the addition of new API. It is about adding
methods similar to those found in ReadOnlyKeyValueStore to the
ReadOnlyWindowStore class. As it appears the discussion has reached a
conclusion, I would like to start the voting process.
https://cwiki.apache.org/confluence/display/KAFKA/KIP-205%3A+Add+all%28%29+and+range%28%29+API+to+ReadOnlyWindowStore

Thanks for your patience!


Jenkins build is back to normal : kafka-trunk-jdk9 #150

2017-10-23 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : kafka-1.0-jdk7 #55

2017-10-23 Thread Apache Jenkins Server
See 




Re: [VOTE] 1.0.0 RC3

2017-10-23 Thread Jaikiran Pai
A minor request about the RC releases in context of Maven - Right now, 
these RC releases are being released into (Apache staging) Maven 
repository using the version number of the final release. So each RC1, 
RC2, RC3 and so on, all end up in Maven repo as 1.0.0 [1]. For using 
these as dependencies in some of our test projects, we end up adding a 
dependency on 1.0.0. In Maven land, a non-snapshot release (like this 
one) is considered "never changing" which means that a locally 
downloaded version of 1.0.0 is given preference to any new updates to 
the same version at a remote repo (like this staging repo). So a 
subsequent test will end up using the local version even if a new RC is 
released.


Could these releases into Maven be made such that they have the same 
version number as the RC number. So something like 1.0.0.RC3 being a 
actual version number in the Maven repo? That way any new RC release 
would mean just changing the version number in our dependencies and 
being sure it pulls in the right version.


[1] 
https://repository.apache.org/content/groups/staging/org/apache/kafka/kafka_2.12/1.0.0/


-Jaikiran


On 24/10/17 6:30 AM, Guozhang Wang wrote:

Hello Kafka users, developers and client-developers,

This is the third candidate for release of Apache Kafka 1.0.0. The main PRs
that gets merged in after RC1 are the following:

https://github.com/apache/kafka/commit/dc6bfa553e73ffccd1e604963e076c
78d8ddcd69

It's worth noting that starting in this version we are using a different
version protocol with three digits: *major.minor.bug-fix*

Any and all testing is welcome, but the following areas are worth
highlighting:

1. Client developers should verify that their clients can produce/consume
to/from 1.0.0 brokers (ideally with compressed and uncompressed data).
2. Performance and stress testing. Heroku and LinkedIn have helped with
this in the past (and issues have been found and fixed).
3. End users can verify that their apps work correctly with the new release.

This is a major version release of Apache Kafka. It includes 29 new KIPs.
See the release notes and release plan
(*https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=71764913
*)
for more details. A few feature highlights:

* Java 9 support with significantly faster TLS and CRC32C implementations
* JBOD improvements: disk failure only disables failed disk but not the
broker (KIP-112/KIP-113 part I)
* Controller improvements: reduced logging change to greatly accelerate
admin request handling.
* Newly added metrics across all the modules (KIP-164, KIP-168, KIP-187,
KIP-188, KIP-196)
* Kafka Streams API improvements (KIP-120 / 130 / 138 / 150 / 160 / 161),
and drop compatibility "Evolving" annotations

Release notes for the 1.0.0 release:
*http://home.apache.org/~guozhang/kafka-1.0.0-rc3/RELEASE_NOTES.html
*



*** Please download, test and vote by Friday, October 20, 8pm PT

Kafka's KEYS file containing PGP keys we use to sign the release:
http://kafka.apache.org/KEYS

* Release artifacts to be voted upon (source and binary):
*http://home.apache.org/~guozhang/kafka-1.0.0-rc3/
*

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/org/apache/kafka/

* Javadoc:
*http://home.apache.org/~guozhang/kafka-1.0.0-rc3/javadoc/
*

* Tag to be voted upon (off 1.0 branch) is the 1.0.0-rc2 tag:

https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=7774b0da8ead0d9edd1d4b2f7e1cd743af694112


* Documentation:
Note the documentation can't be pushed live due to changes that will not go
live until the release. You can manually verify by downloading
http://home.apache.org/~guozhang/kafka-1.0.0-rc3/kafka_2.11-1.0.0-site-docs.tgz

I will update this thread with up coming Jenkins builds for this RC later,
they are currently being executed and will be done tomorrow.


/**


Thanks,
-- Guozhang





Re: [kafka-clients] [VOTE] 1.0.0 RC2

2017-10-23 Thread Dana Powers
1.0.0-RC2 passed all kafka-python integration tests. Excited for this
release -- great work everyone!

-Dana

On Tue, Oct 17, 2017 at 9:47 AM, Guozhang Wang  wrote:

> Hello Kafka users, developers and client-developers,
>
> This is the third candidate for release of Apache Kafka 1.0.0. The main
> PRs that gets merged in after RC1 are the following:
>
> https://github.com/apache/kafka/commit/dc6bfa553e73ffccd1e604963e076c
> 78d8ddcd69
>
> It's worth noting that starting in this version we are using a different
> version protocol with three digits: *major.minor.bug-fix*
>
> Any and all testing is welcome, but the following areas are worth
> highlighting:
>
> 1. Client developers should verify that their clients can produce/consume
> to/from 1.0.0 brokers (ideally with compressed and uncompressed data).
> 2. Performance and stress testing. Heroku and LinkedIn have helped with
> this in the past (and issues have been found and fixed).
> 3. End users can verify that their apps work correctly with the new
> release.
>
> This is a major version release of Apache Kafka. It includes 29 new KIPs.
> See the release notes and release plan 
> (*https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=71764913
> *)
> for more details. A few feature highlights:
>
> * Java 9 support with significantly faster TLS and CRC32C implementations
> * JBOD improvements: disk failure only disables failed disk but not the
> broker (KIP-112/KIP-113)
> * Controller improvements: async ZK access for faster administrative
> request handling
> * Newly added metrics across all the modules (KIP-164, KIP-168, KIP-187,
> KIP-188, KIP-196)
> * Kafka Streams API improvements (KIP-120 / 130 / 138 / 150 / 160 / 161),
> and drop compatibility "Evolving" annotations
>
> Release notes for the 1.0.0 release:
> *http://home.apache.org/~guozhang/kafka-1.0.0-rc2/RELEASE_NOTES.html
> *
>
>
>
> *** Please download, test and vote by Friday, October 20, 8pm PT
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> *http://home.apache.org/~guozhang/kafka-1.0.0-rc2/
> *
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
>
> * Javadoc:
> *http://home.apache.org/~guozhang/kafka-1.0.0-rc2/javadoc/
> *
>
> * Tag to be voted upon (off 1.0 branch) is the 1.0.0-rc2 tag:
>
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=
> 51d5f12e190a38547839c7d2710c97faaeaca586
>
> * Documentation:
> Note the documentation can't be pushed live due to changes that will not go
> live until the release. You can manually verify by downloading
> http://home.apache.org/~guozhang/kafka-1.0.0-rc2/
> kafka_2.11-1.0.0-site-docs.tgz
>
> * Successful Jenkins builds for the 1.0.0 branch:
> Unit/integration tests: https://builds.apache.org/job/kafka-1.0-jdk7/40/
> System test: https://jenkins.confluent.io/job/system-test-kafka-1.0/6/
>
>
> /**
>
>
> Thanks,
> -- Guozhang
>
> --
> You received this message because you are subscribed to the Google Groups
> "kafka-clients" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to kafka-clients+unsubscr...@googlegroups.com.
> To post to this group, send email to kafka-clie...@googlegroups.com.
> Visit this group at https://groups.google.com/group/kafka-clients.
> To view this discussion on the web visit https://groups.google.com/d/
> msgid/kafka-clients/CAHwHRrXD0nLUqFV0HV_Mtz5eY%
> 2B2RhXCSk_xX1FPkHV%3D0s6u7pQ%40mail.gmail.com
> 
> .
> For more options, visit https://groups.google.com/d/optout.
>


[GitHub] kafka pull request #4124: KAFKA-6074 Use ZookeeperClient in ReplicaManager a...

2017-10-23 Thread tedyu
GitHub user tedyu opened a pull request:

https://github.com/apache/kafka/pull/4124

KAFKA-6074 Use ZookeeperClient in ReplicaManager and Partition



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tedyu/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4124.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4124


commit 0164ff44b0e67cbec9e8b56efe6e139ef87e5d69
Author: tedyu 
Date:   2017-10-24T04:51:59Z

KAFKA-6074 Use ZookeeperClient in ReplicaManager and Partition




---


[GitHub] kafka pull request #4085: HOTFIX: poll with zero millis during restoration

2017-10-23 Thread guozhangwang
GitHub user guozhangwang reopened a pull request:

https://github.com/apache/kafka/pull/4085

HOTFIX: poll with zero millis during restoration

Mirror of #4096 for 0.11.01.

During the restoration phase, when thread state is still in 
PARTITION_ASSIGNED not RUNNING yet, call poll() on the normal consumer with 0 
millisecond timeout, to unblock the restoration of other tasks as soon as 
possible.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka KHotfix-0110-restore-only

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4085.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4085


commit 941a567f0388b4c74d095c444165e4315ff5b9df
Author: Guozhang Wang 
Date:   2017-10-17T20:29:15Z

poll with zero millis during restoration

commit 01af809ba31857f6f68e19821d25d3cef5dd62c0
Author: Guozhang Wang 
Date:   2017-10-12T21:18:46Z

dummy

commit 6c7b512cf8fe93661aea55c2c0622b2064be5beb
Author: Guozhang Wang 
Date:   2017-10-19T05:19:02Z

github comments; fixes for double check

commit 2d26b4d64f6dce0f01a06c379af8c5823a8ed5b6
Author: Guozhang Wang 
Date:   2017-10-23T19:49:22Z

rebase from 0.11.0

commit 9b41200e6565af233389d2d7172348a881912cd3
Author: Guozhang Wang 
Date:   2017-10-23T19:53:01Z

github comments

commit c00143a42ae7cd0f460bd1254b7b65105c221e77
Author: Guozhang Wang 
Date:   2017-10-23T20:32:00Z

minor fixes post rebasing

commit d11b0373a8663705979d2e58681a81777657d90a
Author: Guozhang Wang 
Date:   2017-10-23T21:28:24Z

one minor fix

commit 379a727509a08ab0bf69e36af432c664acd355ca
Author: Guozhang Wang 
Date:   2017-10-23T21:30:12Z

cherry-pick fix from trunk




---


[GitHub] kafka pull request #4085: HOTFIX: poll with zero millis during restoration

2017-10-23 Thread guozhangwang
Github user guozhangwang closed the pull request at:

https://github.com/apache/kafka/pull/4085


---


Jenkins build is back to normal : kafka-trunk-jdk7 #2920

2017-10-23 Thread Apache Jenkins Server
See 




Re: [kafka-clients] [VOTE] 1.0.0 RC3

2017-10-23 Thread Dana Powers
+1. passed kafka-python integration tests, and manually verified
producer/consumer on both compressed and non-compressed data.

-Dana

On Mon, Oct 23, 2017 at 6:00 PM, Guozhang Wang  wrote:

> Hello Kafka users, developers and client-developers,
>
> This is the third candidate for release of Apache Kafka 1.0.0. The main
> PRs that gets merged in after RC1 are the following:
>
> https://github.com/apache/kafka/commit/dc6bfa553e73ffccd1e60
> 4963e076c78d8ddcd69
>
> It's worth noting that starting in this version we are using a different
> version protocol with three digits: *major.minor.bug-fix*
>
> Any and all testing is welcome, but the following areas are worth
> highlighting:
>
> 1. Client developers should verify that their clients can produce/consume
> to/from 1.0.0 brokers (ideally with compressed and uncompressed data).
> 2. Performance and stress testing. Heroku and LinkedIn have helped with
> this in the past (and issues have been found and fixed).
> 3. End users can verify that their apps work correctly with the new
> release.
>
> This is a major version release of Apache Kafka. It includes 29 new KIPs.
> See the release notes and release plan 
> (*https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=71764913
> *)
> for more details. A few feature highlights:
>
> * Java 9 support with significantly faster TLS and CRC32C implementations
> * JBOD improvements: disk failure only disables failed disk but not the
> broker (KIP-112/KIP-113 part I)
> * Controller improvements: reduced logging change to greatly accelerate
> admin request handling.
> * Newly added metrics across all the modules (KIP-164, KIP-168, KIP-187,
> KIP-188, KIP-196)
> * Kafka Streams API improvements (KIP-120 / 130 / 138 / 150 / 160 / 161),
> and drop compatibility "Evolving" annotations
>
> Release notes for the 1.0.0 release:
> *http://home.apache.org/~guozhang/kafka-1.0.0-rc3/RELEASE_NOTES.html
> *
>
>
>
> *** Please download, test and vote by Friday, October 20, 8pm PT
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> *http://home.apache.org/~guozhang/kafka-1.0.0-rc3/
> *
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
>
> * Javadoc:
> *http://home.apache.org/~guozhang/kafka-1.0.0-rc3/javadoc/
> *
>
> * Tag to be voted upon (off 1.0 branch) is the 1.0.0-rc2 tag:
>
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=
> 7774b0da8ead0d9edd1d4b2f7e1cd743af694112
>
>
> * Documentation:
> Note the documentation can't be pushed live due to changes that will not go
> live until the release. You can manually verify by downloading
> http://home.apache.org/~guozhang/kafka-1.0.0-rc3/
> kafka_2.11-1.0.0-site-docs.tgz
>
> I will update this thread with up coming Jenkins builds for this RC later,
> they are currently being executed and will be done tomorrow.
>
>
> /**
>
>
> Thanks,
> -- Guozhang
>
> --
> You received this message because you are subscribed to the Google Groups
> "kafka-clients" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to kafka-clients+unsubscr...@googlegroups.com.
> To post to this group, send email to kafka-clie...@googlegroups.com.
> Visit this group at https://groups.google.com/group/kafka-clients.
> To view this discussion on the web visit https://groups.google.com/d/
> msgid/kafka-clients/CAHwHRrU8QrK7cPSRAj7uaEQ1vgnwv
> o8Y5rJxa1-54dLqxLAsHw%40mail.gmail.com
> 
> .
> For more options, visit https://groups.google.com/d/optout.
>


[GitHub] kafka pull request #4116: KAFKA-6105: load client properties in proper order...

2017-10-23 Thread cnZach
Github user cnZach closed the pull request at:

https://github.com/apache/kafka/pull/4116


---


[GitHub] kafka pull request #4125: KAFKA-6105: load client properties in proper order...

2017-10-23 Thread cnZach
GitHub user cnZach opened a pull request:

https://github.com/apache/kafka/pull/4125

KAFKA-6105: load client properties in proper order for EndToEndLatency tool

Currently, the property file is loaded first, and later a auto generated 
group.id is used:
consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "test-group-" + 
System.currentTimeMillis())

so even user gives the group.id in a property file, it is not picked up.

Change it to load client properties in proper order: set default values 
first, then try to load the custom values set in client.properties file.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cnZach/kafka cnZach_KAFKA-6105

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4125.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4125


commit 448ea9df1f735da5362eb3204e9bd7a133516fb2
Author: Yuexin Zhang 
Date:   2017-10-24T05:48:04Z

load client properties in proper order: set default values first, then try 
to load the custom values set in client.properties file




---


Build failed in Jenkins: kafka-trunk-jdk8 #2165

2017-10-23 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] HOTFIX: Poll with zero milliseconds during restoration phase

[ismael] MINOR: Fix typo in ConsumerCoordinator comment

--
[...truncated 3.80 MB...]

org.apache.kafka.connect.runtime.rest.entities.ConnectorTypeTest > 
testToStringIsLowerCase PASSED

org.apache.kafka.connect.runtime.rest.entities.ConnectorTypeTest > testForValue 
STARTED

org.apache.kafka.connect.runtime.rest.entities.ConnectorTypeTest > testForValue 
PASSED

org.apache.kafka.connect.runtime.TransformationConfigTest > 
testEmbeddedConfigCast STARTED

org.apache.kafka.connect.runtime.TransformationConfigTest > 
testEmbeddedConfigCast PASSED

org.apache.kafka.connect.runtime.TransformationConfigTest > 
testEmbeddedConfigRegexRouter STARTED

org.apache.kafka.connect.runtime.TransformationConfigTest > 
testEmbeddedConfigRegexRouter PASSED

org.apache.kafka.connect.runtime.TransformationConfigTest > 
testEmbeddedConfigSetSchemaMetadata STARTED

org.apache.kafka.connect.runtime.TransformationConfigTest > 
testEmbeddedConfigSetSchemaMetadata PASSED

org.apache.kafka.connect.runtime.TransformationConfigTest > 
testEmbeddedConfigTimestampConverter STARTED

org.apache.kafka.connect.runtime.TransformationConfigTest > 
testEmbeddedConfigTimestampConverter PASSED

org.apache.kafka.connect.runtime.TransformationConfigTest > 
testEmbeddedConfigHoistField STARTED

org.apache.kafka.connect.runtime.TransformationConfigTest > 
testEmbeddedConfigHoistField PASSED

org.apache.kafka.connect.runtime.TransformationConfigTest > 
testEmbeddedConfigMaskField STARTED

org.apache.kafka.connect.runtime.TransformationConfigTest > 
testEmbeddedConfigMaskField PASSED

org.apache.kafka.connect.runtime.TransformationConfigTest > 
testEmbeddedConfigInsertField STARTED

org.apache.kafka.connect.runtime.TransformationConfigTest > 
testEmbeddedConfigInsertField PASSED

org.apache.kafka.connect.runtime.TransformationConfigTest > 
testEmbeddedConfigFlatten STARTED

org.apache.kafka.connect.runtime.TransformationConfigTest > 
testEmbeddedConfigFlatten PASSED

org.apache.kafka.connect.runtime.TransformationConfigTest > 
testEmbeddedConfigReplaceField STARTED

org.apache.kafka.connect.runtime.TransformationConfigTest > 
testEmbeddedConfigReplaceField PASSED

org.apache.kafka.connect.runtime.TransformationConfigTest > 
testEmbeddedConfigTimestampRouter STARTED

org.apache.kafka.connect.runtime.TransformationConfigTest > 
testEmbeddedConfigTimestampRouter PASSED

org.apache.kafka.connect.runtime.TransformationConfigTest > 
testEmbeddedConfigValueToKey STARTED

org.apache.kafka.connect.runtime.TransformationConfigTest > 
testEmbeddedConfigValueToKey PASSED

org.apache.kafka.connect.runtime.TransformationConfigTest > 
testEmbeddedConfigExtractField STARTED

org.apache.kafka.connect.runtime.TransformationConfigTest > 
testEmbeddedConfigExtractField PASSED

org.apache.kafka.connect.runtime.ConnectorConfigTest > unconfiguredTransform 
STARTED

org.apache.kafka.connect.runtime.ConnectorConfigTest > unconfiguredTransform 
PASSED

org.apache.kafka.connect.runtime.ConnectorConfigTest > 
multipleTransformsOneDangling STARTED

org.apache.kafka.connect.runtime.ConnectorConfigTest > 
multipleTransformsOneDangling PASSED

org.apache.kafka.connect.runtime.ConnectorConfigTest > misconfiguredTransform 
STARTED

org.apache.kafka.connect.runtime.ConnectorConfigTest > misconfiguredTransform 
PASSED

org.apache.kafka.connect.runtime.ConnectorConfigTest > noTransforms STARTED

org.apache.kafka.connect.runtime.ConnectorConfigTest > noTransforms PASSED

org.apache.kafka.connect.runtime.ConnectorConfigTest > danglingTransformAlias 
STARTED

org.apache.kafka.connect.runtime.ConnectorConfigTest > danglingTransformAlias 
PASSED

org.apache.kafka.connect.runtime.ConnectorConfigTest > multipleTransforms 
STARTED

org.apache.kafka.connect.runtime.ConnectorConfigTest > multipleTransforms PASSED

org.apache.kafka.connect.runtime.ConnectorConfigTest > singleTransform STARTED

org.apache.kafka.connect.runtime.ConnectorConfigTest > singleTransform PASSED

org.apache.kafka.connect.runtime.ConnectorConfigTest > wrongTransformationType 
STARTED

org.apache.kafka.connect.runtime.ConnectorConfigTest > wrongTransformationType 
PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testPutConnectorConfig STARTED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testPutTaskConfigs STARTED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testPutTaskConfigs PASSED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testCreateConnectorFailedBasicValidation STARTED

org.apache.kafka.connect.runtime.standalone.StandaloneHerderTest > 
testCreateConnectorFailedBasicValidation PASSED

org.apache.kafka.connect.runtime

[GitHub] kafka pull request #3235: MINOR: Running the ConsumerIteratorTest unit tests...

2017-10-23 Thread 10110346
Github user 10110346 closed the pull request at:

https://github.com/apache/kafka/pull/3235


---