Re: [VOTE] KIP-204 : adding records deletion operation to the new Admin Client API

2017-10-31 Thread Paolo Patierno
Hi all,


because I don't see any further discussion around KIP-204 
(https://cwiki.apache.org/confluence/display/KAFKA/KIP-204+%3A+adding+records+deletion+operation+to+the+new+Admin+Client+API)
 and I have already opened a PR with the implementation, can we resume the 
vote started on October 18?

There are only "non-binding" votes so far.

Thanks,


Paolo Patierno
Senior Software Engineer (IoT) @ Red Hat
Microsoft MVP on Azure & IoT
Microsoft Azure Advisor

Twitter : @ppatierno
Linkedin : paolopatierno
Blog : DevExperience



From: Viktor Somogyi 
Sent: Wednesday, October 18, 2017 10:49 AM
To: dev@kafka.apache.org
Subject: Re: [VOTE] KIP-204 : adding records deletion operation to the new 
Admin Client API

+1 (non-binding)

On Wed, Oct 18, 2017 at 8:23 AM, Manikumar 
wrote:

> + (non-binding)
>
>
> Thanks,
> Manikumar
>
> On Tue, Oct 17, 2017 at 7:42 AM, Dong Lin  wrote:
>
> > Thanks for the KIP. +1 (non-binding)
> >
> > On Wed, Oct 11, 2017 at 2:27 AM, Ted Yu  wrote:
> >
> > > +1
> > >
> > > On Mon, Oct 2, 2017 at 10:51 PM, Paolo Patierno 
> > > wrote:
> > >
> > > > Hi all,
> > > >
> > > > I didn't see any further discussion around this KIP, so I'd like to
> > start
> > > > the vote for it.
> > > >
> > > > Just for reference : https://cwiki.apache.org/
> > > > confluence/display/KAFKA/KIP-204+%3A+adding+records+
> > > > deletion+operation+to+the+new+Admin+Client+API
> > > >
> > > >
> > > > Thanks,
> > > >
> > > > Paolo Patierno
> > > > Senior Software Engineer (IoT) @ Red Hat
> > > > Microsoft MVP on Azure & IoT
> > > > Microsoft Azure Advisor
> > > >
> > > > Twitter : @ppatierno
> > > > Linkedin : paolopatierno
> > > > Blog : DevExperience
> > > >
> > >
> >
>


[jira] [Created] (KAFKA-6152) Support ExpanderSketch algorithm for space and time efficient stream processing.

2017-10-31 Thread Edmon Begoli (JIRA)
Edmon Begoli created KAFKA-6152:
---

 Summary: Support ExpanderSketch algorithm for space and time 
efficient stream processing.
 Key: KAFKA-6152
 URL: https://issues.apache.org/jira/browse/KAFKA-6152
 Project: Kafka
  Issue Type: New Feature
  Components: core
Reporter: Edmon Begoli
 Attachments: larsen2016.pdf

Support the new ExpanderSketch algorithm (Larsen et al., 2016), which is based 
on cluster-preserving clustering and is considered the best contemporary 
streaming heavy-hitters algorithm (Quanta, 2017).

It achieves optimal O(ε^{-p} log n) space, O(log n) update time, fast 
O(ε^{-p} poly(log n)) query time, and correctness with high probability.
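To make the heavy-hitters problem these bounds refer to concrete, here is a 
much simpler deterministic baseline, the Misra-Gries summary. This is not 
ExpanderSketch itself, only an illustrative sketch of the problem setting; the 
class and method names are hypothetical:

```java
import java.util.*;

// Misra-Gries summary: approximate heavy hitters using at most k-1 counters.
// Any item occurring more than n/k times in a stream of length n is
// guaranteed to survive among the candidates.
class MisraGries {
    private final int k;
    private final Map<String, Integer> counters = new HashMap<>();

    MisraGries(int k) { this.k = k; }

    void update(String item) {
        if (counters.containsKey(item)) {
            counters.merge(item, 1, Integer::sum);        // known item: count it
        } else if (counters.size() < k - 1) {
            counters.put(item, 1);                        // free slot: track it
        } else {
            counters.replaceAll((key, v) -> v - 1);       // no slot: decrement all
            counters.values().removeIf(v -> v == 0);      // evict exhausted counters
        }
    }

    Set<String> candidates() { return counters.keySet(); }
}
```

ExpanderSketch improves on this family of techniques by supporting deletions 
(turnstile streams) and ℓp heavy hitters while keeping the update and query 
bounds quoted above.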

Larsen, K. G., Nelson, J., Nguyên, H. L., & Thorup, M. (2016, October). Heavy 
hitters via cluster-preserving clustering. In Foundations of Computer Science 
(FOCS), 2016 IEEE 57th Annual Symposium on (pp. 61-70). IEEE.
https://arxiv.org/abs/1604.01357

Hartnett, K., (2017, October). Best-Ever Algorithm Found for Huge Streams of 
Data. Quanta Magazine, October 2017, online at: 
https://www.quantamagazine.org/best-ever-algorithm-found-for-huge-streams-of-data-20171024/



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [VOTE] KIP-171 - Extend Consumer Group Reset Offset for Stream Application

2017-10-31 Thread Damian Guy
Thanks for the KIP - +1 (binding)

On Mon, 23 Oct 2017 at 18:39 Guozhang Wang  wrote:

> Thanks Jorge for driving this KIP! +1 (binding).
>
>
> Guozhang
>
> On Mon, Oct 16, 2017 at 2:11 PM, Bill Bejeck  wrote:
>
> > +1
> >
> > Thanks,
> > Bill
> >
> > On Fri, Oct 13, 2017 at 6:36 PM, Ted Yu  wrote:
> >
> > > +1
> > >
> > > On Fri, Oct 13, 2017 at 3:32 PM, Matthias J. Sax <
> matth...@confluent.io>
> > > wrote:
> > >
> > > > +1
> > > >
> > > >
> > > >
> > > > On 9/11/17 3:04 PM, Jorge Esteban Quilcate Otoya wrote:
> > > > > Hi All,
> > > > >
> > > > > It seems that there is no further concern with the KIP-171.
> > > > > At this point we would like to start the voting process.
> > > > >
> > > > > The KIP can be found here:
> > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > 171+-+Extend+Consumer+Group+Reset+Offset+for+Stream+Application
> > > > >
> > > > >
> > > > > Thanks!
> > > > >
> > > >
> > > >
> > >
> >
>
>
>
> --
> -- Guozhang
>


[jira] [Created] (KAFKA-6153) Kafka Transactional Messaging does not work on windows but on linux

2017-10-31 Thread Changhai Han (JIRA)
Changhai Han created KAFKA-6153:
---

 Summary: Kafka Transactional Messaging does not work on windows 
but on linux
 Key: KAFKA-6153
 URL: https://issues.apache.org/jira/browse/KAFKA-6153
 Project: Kafka
  Issue Type: Bug
  Components: consumer, producer 
Affects Versions: 0.11.0.1
Reporter: Changhai Han
Priority: Critical


As mentioned in the title, Kafka transactional messaging does not work on 
Windows, although it does work on Linux.

The code is like below:

stringProducer.initTransactions();

while (true) {
    ConsumerRecords<String, String> records = stringConsumer.poll(2000);

    if (!records.isEmpty()) {
        stringProducer.beginTransaction();
        try {
            for (ConsumerRecord<String, String> record : records) {
                LOGGER.info(record.value().toString());
                stringProducer.send(new ProducerRecord<>("kafka-test-out",
                        record.value().toString()));
            }

            stringProducer.commitTransaction();
        } catch (ProducerFencedException e) {
            LOGGER.warn(e.getMessage());
            stringProducer.close();
            stringConsumer.close();
        } catch (KafkaException e) {
            LOGGER.warn(e.getMessage());
            stringProducer.abortTransaction();
        }
    }
}

When I debug it, it seems to get stuck on committing the transaction. Has 
anyone else experienced the same thing? Are there any specific configs that I 
need to add to the producer config? Thanks.
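For reference, transactions do require specific client settings; a minimal 
sketch of the relevant configs (the transactional.id value is illustrative):

```properties
# Producer: a stable transactional.id is required before calling initTransactions()
transactional.id=my-transactional-producer
enable.idempotence=true
acks=all

# Consumer: read only records from committed transactions
isolation.level=read_committed
```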





[GitHub] kafka-site pull request #106: MINOR: Fix typo in checkstyle command

2017-10-31 Thread makearl
GitHub user makearl opened a pull request:

https://github.com/apache/kafka-site/pull/106

MINOR: Fix typo in checkstyle command

Fix a typo in the [Coding 
Guidelines](http://kafka.apache.org/coding-guide.html) for Kafka Streams 
checkstyle commands

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/makearl/kafka-site fix-typo

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka-site/pull/106.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #106


commit b89055d8df20c2d3f3832f3e00320fef649cec8a
Author: makearl 
Date:   2017-10-31T12:42:28Z

Fix typo in checkstyle command




---


Re: [VOTE] KIP-204 : adding records deletion operation to the new Admin Client API

2017-10-31 Thread Bill Bejeck
+1

Thanks,
Bill

On Tue, Oct 31, 2017 at 4:36 AM, Paolo Patierno  wrote:

> Hi all,
>
>
> because I don't see any further discussion around KIP-204 (
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 204+%3A+adding+records+deletion+operation+to+the+new+Admin+Client+API)
> and I have already opened a PR with the implementation, can we resume the
> vote started on October 18?
>
> There are only "non binding" votes up to now.
>
> Thanks,
>
>
> Paolo Patierno
> Senior Software Engineer (IoT) @ Red Hat
> Microsoft MVP on Azure & IoT
> Microsoft Azure Advisor
>
> Twitter : @ppatierno
> Linkedin : paolopatierno
> Blog : DevExperience
>
>
> 
> From: Viktor Somogyi 
> Sent: Wednesday, October 18, 2017 10:49 AM
> To: dev@kafka.apache.org
> Subject: Re: [VOTE] KIP-204 : adding records deletion operation to the new
> Admin Client API
>
> +1 (non-binding)
>
> On Wed, Oct 18, 2017 at 8:23 AM, Manikumar 
> wrote:
>
> > + (non-binding)
> >
> >
> > Thanks,
> > Manikumar
> >
> > On Tue, Oct 17, 2017 at 7:42 AM, Dong Lin  wrote:
> >
> > > Thanks for the KIP. +1 (non-binding)
> > >
> > > On Wed, Oct 11, 2017 at 2:27 AM, Ted Yu  wrote:
> > >
> > > > +1
> > > >
> > > > On Mon, Oct 2, 2017 at 10:51 PM, Paolo Patierno 
> > > > wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > > I didn't see any further discussion around this KIP, so I'd like to
> > > start
> > > > > the vote for it.
> > > > >
> > > > > Just for reference : https://cwiki.apache.org/
> > > > > confluence/display/KAFKA/KIP-204+%3A+adding+records+
> > > > > deletion+operation+to+the+new+Admin+Client+API
> > > > >
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Paolo Patierno
> > > > > Senior Software Engineer (IoT) @ Red Hat
> > > > > Microsoft MVP on Azure & IoT
> > > > > Microsoft Azure Advisor
> > > > >
> > > > > Twitter : @ppatierno
> > > > > Linkedin : paolopatierno
> > > > > Blog : DevExperience
> > > > >
> > > >
> > >
> >
>


Re: [VOTE] 1.0.0 RC4

2017-10-31 Thread Ismael Juma
+1 (binding) from me. Tested the quickstart with the source and binary
(Scala 2.12) artifacts, ran the tests on the source artifact and verified
some signatures and hashes on source and binary (Scala 2.11) artifacts.

Thanks for running the release, Guozhang!

Ismael

On Fri, Oct 27, 2017 at 6:28 PM, Guozhang Wang  wrote:

> Hello Kafka users, developers and client-developers,
>
> This is the fifth candidate for release of Apache Kafka 1.0.0. The main PRs
> that got merged in after RC3 are the following:
>
> *https://github.com/apache/kafka/commit/def1a768a6301c14ad6611358716ab
> 03de04e76b
>  03de04e76b>*
>
> *https://github.com/apache/kafka/commit/b9fc0f2e6892062efa1fff0c6f7bfc
> 683c8ba7ab
>  683c8ba7ab>*
>
> *https://github.com/apache/kafka/commit/a51fdcd2ee7efbd14857448a2fb7ec
> b71531e1f9
>  b71531e1f9>*
>
> *https://github.com/apache/kafka/commit/109a60c77a56d4afed488c3ba35dc8
> 459fde15ce
>  459fde15ce>*
>
> It's worth noting that starting in this version we are using a new
> three-digit versioning scheme: *major.minor.bug-fix*
>
> Any and all testing is welcome, but the following areas are worth
> highlighting:
>
> 1. Client developers should verify that their clients can produce/consume
> to/from 1.0.0 brokers (ideally with compressed and uncompressed data).
> 2. Performance and stress testing. Heroku and LinkedIn have helped with
> this in the past (and issues have been found and fixed).
> 3. End users can verify that their apps work correctly with the new
> release.
>
> This is a major version release of Apache Kafka. It includes 29 new KIPs.
> See the release notes and release plan
> (*https://cwiki.apache.org/confluence/pages/viewpage.
> action?pageId=71764913
>  >*)
> for more details. A few feature highlights:
>
> * Java 9 support with significantly faster TLS and CRC32C implementations
> * JBOD improvements: disk failure only disables failed disk but not the
> broker (KIP-112/KIP-113 part I)
> * Controller improvements: reduced logging change to greatly accelerate
> admin request handling.
> * Newly added metrics across all the modules (KIP-164, KIP-168, KIP-187,
> KIP-188, KIP-196)
> * Kafka Streams API improvements (KIP-120 / 130 / 138 / 150 / 160 / 161),
> and drop compatibility "Evolving" annotations
>
> Release notes for the 1.0.0 release:
> *http://home.apache.org/~guozhang/kafka-1.0.0-rc4/RELEASE_NOTES.html
> *
>
>
>
> *** Please download, test and vote by Tuesday, October 31, 8pm PT
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> *http://home.apache.org/~guozhang/kafka-1.0.0-rc4/
> *
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
>
> * Javadoc:
> *http://home.apache.org/~guozhang/kafka-1.0.0-rc4/javadoc/
> *
>
> * Tag to be voted upon (off 1.0 branch) is the 1.0.0-rc4 tag:
>
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=
> d4a3919e408e444dde5db5a261c6f912cb8475a2
>
> * Documentation:
> Note the documentation can't be pushed live due to changes that will not go
> live until the release. You can manually verify by downloading
> http://home.apache.org/~guozhang/kafka-1.0.0-rc4/
> kafka_2.11-1.0.0-site-docs.tgz
>
>
> The Jenkins builders for this RC can now be found here:
>
> System test (still running):
> *https://jenkins.confluent.io/job/system-test-kafka-1.0/18/
> *
> Unit test: *https://builds.apache.org/job/kafka-1.0-jdk7/61/
> *
>
>
> /**
>
>
> Thanks,
> -- Guozhang
>


Re: [VOTE] 1.0.0 RC4

2017-10-31 Thread Manikumar
+1 (non-binding). Verified quickstart, ran producer/consumer perf scripts
and the streams quickstart, and ran tests on the src distribution.

On Tue, Oct 31, 2017 at 8:42 PM, Ismael Juma  wrote:

> +1 (binding) from me. Tested the quickstart with the source and binary
> (Scala 2.12) artifacts, ran the tests on the source artifact and verified
> some signatures and hashes on source and binary (Scala 2.11) artifacts.
>
> Thanks for running the release, Guozhang!
>
> Ismael
>
> On Fri, Oct 27, 2017 at 6:28 PM, Guozhang Wang  wrote:
>
> > Hello Kafka users, developers and client-developers,
> >
> > This is the fifth candidate for release of Apache Kafka 1.0.0. The main
> PRs
> > that gets merged in after RC3 are the following:
> >
> > *https://github.com/apache/kafka/commit/def1a768a6301c14ad6611358716ab
> > 03de04e76b
> >  > 03de04e76b>*
> >
> > *https://github.com/apache/kafka/commit/b9fc0f2e6892062efa1fff0c6f7bfc
> > 683c8ba7ab
> >  > 683c8ba7ab>*
> >
> > *https://github.com/apache/kafka/commit/a51fdcd2ee7efbd14857448a2fb7ec
> > b71531e1f9
> >  > b71531e1f9>*
> >
> > *https://github.com/apache/kafka/commit/109a60c77a56d4afed488c3ba35dc8
> > 459fde15ce
> >  > 459fde15ce>*
> >
> > It's worth noting that starting in this version we are using a different
> > version protocol with three digits: *major.minor.bug-fix*
> >
> > Any and all testing is welcome, but the following areas are worth
> > highlighting:
> >
> > 1. Client developers should verify that their clients can produce/consume
> > to/from 1.0.0 brokers (ideally with compressed and uncompressed data).
> > 2. Performance and stress testing. Heroku and LinkedIn have helped with
> > this in the past (and issues have been found and fixed).
> > 3. End users can verify that their apps work correctly with the new
> > release.
> >
> > This is a major version release of Apache Kafka. It includes 29 new KIPs.
> > See the release notes and release plan
> > (*https://cwiki.apache.org/confluence/pages/viewpage.
> > action?pageId=71764913
> >  pageId=71764913
> > >*)
> > for more details. A few feature highlights:
> >
> > * Java 9 support with significantly faster TLS and CRC32C implementations
> > * JBOD improvements: disk failure only disables failed disk but not the
> > broker (KIP-112/KIP-113 part I)
> > * Controller improvements: reduced logging change to greatly accelerate
> > admin request handling.
> > * Newly added metrics across all the modules (KIP-164, KIP-168, KIP-187,
> > KIP-188, KIP-196)
> > * Kafka Streams API improvements (KIP-120 / 130 / 138 / 150 / 160 / 161),
> > and drop compatibility "Evolving" annotations
> >
> > Release notes for the 1.0.0 release:
> > *http://home.apache.org/~guozhang/kafka-1.0.0-rc4/RELEASE_NOTES.html
> > *
> >
> >
> >
> > *** Please download, test and vote by Tuesday, October 31, 8pm PT
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > http://kafka.apache.org/KEYS
> >
> > * Release artifacts to be voted upon (source and binary):
> > *http://home.apache.org/~guozhang/kafka-1.0.0-rc4/
> > *
> >
> > * Maven artifacts to be voted upon:
> > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> >
> > * Javadoc:
> > *http://home.apache.org/~guozhang/kafka-1.0.0-rc4/javadoc/
> > *
> >
> > * Tag to be voted upon (off 1.0 branch) is the 1.0.0-rc4 tag:
> >
> > https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=
> > d4a3919e408e444dde5db5a261c6f912cb8475a2
> >
> > * Documentation:
> > Note the documentation can't be pushed live due to changes that will not
> go
> > live until the release. You can manually verify by downloading
> > http://home.apache.org/~guozhang/kafka-1.0.0-rc4/
> > kafka_2.11-1.0.0-site-docs.tgz
> >
> >
> > The Jenkins builders for this RC can now be found here:
> >
> > System test (still running):
> > *https://jenkins.confluent.io/job/system-test-kafka-1.0/18/
> > *
> > Unit test: *https://builds.apache.org/job/kafka-1.0-jdk7/61/
> > *
> >
> >
> > /**
> >
> >
> > Thanks,
> > -- Guozhang
> >
>


Re: [VOTE] 1.0.0 RC4

2017-10-31 Thread Ted Yu
+1 (non-binding)

Verified signatures.
Ran test suite.

On Tue, Oct 31, 2017 at 8:53 AM, Manikumar 
wrote:

> +1 (non-binding). Verified quickstart, ran producer/consumer perf scripts,
> streams quickstart
> ran tests on src distribution.
>
> On Tue, Oct 31, 2017 at 8:42 PM, Ismael Juma  wrote:
>
> > +1 (binding) from me. Tested the quickstart with the source and binary
> > (Scala 2.12) artifacts, ran the tests on the source artifact and verified
> > some signatures and hashes on source and binary (Scala 2.11) artifacts.
> >
> > Thanks for running the release, Guozhang!
> >
> > Ismael
> >
> > On Fri, Oct 27, 2017 at 6:28 PM, Guozhang Wang 
> wrote:
> >
> > > Hello Kafka users, developers and client-developers,
> > >
> > > This is the fifth candidate for release of Apache Kafka 1.0.0. The main
> > PRs
> > > that gets merged in after RC3 are the following:
> > >
> > > *https://github.com/apache/kafka/commit/def1a768a6301c14ad6611358716ab
> > > 03de04e76b
> > >  > > 03de04e76b>*
> > >
> > > *https://github.com/apache/kafka/commit/b9fc0f2e6892062efa1fff0c6f7bfc
> > > 683c8ba7ab
> > >  > > 683c8ba7ab>*
> > >
> > > *https://github.com/apache/kafka/commit/a51fdcd2ee7efbd14857448a2fb7ec
> > > b71531e1f9
> > >  > > b71531e1f9>*
> > >
> > > *https://github.com/apache/kafka/commit/109a60c77a56d4afed488c3ba35dc8
> > > 459fde15ce
> > >  > > 459fde15ce>*
> > >
> > > It's worth noting that starting in this version we are using a
> different
> > > version protocol with three digits: *major.minor.bug-fix*
> > >
> > > Any and all testing is welcome, but the following areas are worth
> > > highlighting:
> > >
> > > 1. Client developers should verify that their clients can
> produce/consume
> > > to/from 1.0.0 brokers (ideally with compressed and uncompressed data).
> > > 2. Performance and stress testing. Heroku and LinkedIn have helped with
> > > this in the past (and issues have been found and fixed).
> > > 3. End users can verify that their apps work correctly with the new
> > > release.
> > >
> > > This is a major version release of Apache Kafka. It includes 29 new
> KIPs.
> > > See the release notes and release plan
> > > (*https://cwiki.apache.org/confluence/pages/viewpage.
> > > action?pageId=71764913
> > >  > pageId=71764913
> > > >*)
> > > for more details. A few feature highlights:
> > >
> > > * Java 9 support with significantly faster TLS and CRC32C
> implementations
> > > * JBOD improvements: disk failure only disables failed disk but not the
> > > broker (KIP-112/KIP-113 part I)
> > > * Controller improvements: reduced logging change to greatly accelerate
> > > admin request handling.
> > > * Newly added metrics across all the modules (KIP-164, KIP-168,
> KIP-187,
> > > KIP-188, KIP-196)
> > > * Kafka Streams API improvements (KIP-120 / 130 / 138 / 150 / 160 /
> 161),
> > > and drop compatibility "Evolving" annotations
> > >
> > > Release notes for the 1.0.0 release:
> > > *http://home.apache.org/~guozhang/kafka-1.0.0-rc4/RELEASE_NOTES.html
> > > *
> > >
> > >
> > >
> > > *** Please download, test and vote by Tuesday, October 31, 8pm PT
> > >
> > > Kafka's KEYS file containing PGP keys we use to sign the release:
> > > http://kafka.apache.org/KEYS
> > >
> > > * Release artifacts to be voted upon (source and binary):
> > > *http://home.apache.org/~guozhang/kafka-1.0.0-rc4/
> > > *
> > >
> > > * Maven artifacts to be voted upon:
> > > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> > >
> > > * Javadoc:
> > > *http://home.apache.org/~guozhang/kafka-1.0.0-rc4/javadoc/
> > > *
> > >
> > > * Tag to be voted upon (off 1.0 branch) is the 1.0.0-rc4 tag:
> > >
> > > https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=
> > > d4a3919e408e444dde5db5a261c6f912cb8475a2
> > >
> > > * Documentation:
> > > Note the documentation can't be pushed live due to changes that will
> not
> > go
> > > live until the release. You can manually verify by downloading
> > > http://home.apache.org/~guozhang/kafka-1.0.0-rc4/
> > > kafka_2.11-1.0.0-site-docs.tgz
> > >
> > >
> > > The Jenkins builders for this RC can now be found here:
> > >
> > > System test (still running):
> > > *https://jenkins.confluent.io/job/system-test-kafka-1.0/18/
> > > *
> > > Unit test: *https://builds.apache.org/job/kafka-1.0-jdk7/61/
> > > *
> > >
> > >
> > > /**
> > >
> > 

[GitHub] kafka pull request #4163: MINOR: build.gradle: sourceCompatibility, targetCo...

2017-10-31 Thread cmccabe
GitHub user cmccabe opened a pull request:

https://github.com/apache/kafka/pull/4163

MINOR: build.gradle: sourceCompatibility, targetCompatibility to allprojects

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cmccabe/kafka gradle2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4163.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4163


commit 8e0442a003aedbf24137dd7ebb41b3057f40c804
Author: Colin P. Mccabe 
Date:   2017-10-31T16:31:27Z

MINOR: build.gradle: sourceCompatibility, targetCompatibility to allprojects




---


[GitHub] kafka pull request #4147: MINOR: Fix inconsistency in StopReplica/LeaderAndI...

2017-10-31 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4147


---


[jira] [Created] (KAFKA-6154) Transient failure TransactionsBounceTest.testBrokerFailure

2017-10-31 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-6154:
--

 Summary: Transient failure TransactionsBounceTest.testBrokerFailure
 Key: KAFKA-6154
 URL: https://issues.apache.org/jira/browse/KAFKA-6154
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson


{code}
java.lang.AssertionError: Out of order messages detected 
expected: but was:
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:118)
at 
kafka.api.TransactionsBounceTest$$anonfun$testBrokerFailure$8.apply(TransactionsBounceTest.scala:140)
at 
kafka.api.TransactionsBounceTest$$anonfun$testBrokerFailure$8.apply(TransactionsBounceTest.scala:139)
at 
scala.collection.mutable.HashMap$$anon$2$$anonfun$foreach$3.apply(HashMap.scala:139)
at 
scala.collection.mutable.HashMap$$anon$2$$anonfun$foreach$3.apply(HashMap.scala:139)
at 
scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:236)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
at scala.collection.mutable.HashMap$$anon$2.foreach(HashMap.scala:139)
at 
kafka.api.TransactionsBounceTest.testBrokerFailure(TransactionsBounceTest.scala:139)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at sun.reflect.GeneratedMethodAccessor115.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
at com.sun.proxy.$Proxy1.processTestClass(Unknown Source)
at 
org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:108)
at sun.reflect.GeneratedMethodAccessor114.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(Me

[GitHub] kafka pull request #4146: MINOR: Tighten up locking when aborting expired tr...

2017-10-31 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4146


---


[GitHub] kafka pull request #4161: Adding lighthouse logos and nav bar

2017-10-31 Thread manjuapu
Github user manjuapu closed the pull request at:

https://github.com/apache/kafka/pull/4161


---


[GitHub] kafka pull request #4164: Adding Trivago logo

2017-10-31 Thread manjuapu
GitHub user manjuapu opened a pull request:

https://github.com/apache/kafka/pull/4164

Adding Trivago logo

@guozhangwang Please review

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/confluentinc/kafka ny-trivago-logos

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4164.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4164


commit 5c58e158916b7a3c3c15064d75acd551a6765695
Author: Manjula K 
Date:   2017-10-31T17:26:59Z

Adding Trivago logo




---


Re: [VOTE] KIP-214: Add zookeeper.max.in.flight.requests config to the broker

2017-10-31 Thread Becket Qin
+1, Thanks for the KIP.

On Mon, Oct 30, 2017 at 3:37 PM, Jeff Widman  wrote:

> +1 (non-binding)
>
> Thanks for putting the work in to benchmark various defaults.
>
> On Mon, Oct 30, 2017 at 3:05 PM, Ismael Juma  wrote:
>
> > Thanks for the KIP, +1 (binding).
> >
> > On 27 Oct 2017 6:15 pm, "Onur Karaman" 
> > wrote:
> >
> > > I'd like to start the vote for KIP-214: Add
> > > zookeeper.max.in.flight.requests config to the broker
> > >
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > 214%3A+Add+zookeeper.max.in.flight.requests+config+to+the+broker
> > >
> > > - Onur
> > >
> >
>
>
>
> --
>
> *Jeff Widman*
> jeffwidman.com  | 740-WIDMAN-J (943-6265)
> <><
>


Build failed in Jenkins: kafka-trunk-jdk9 #162

2017-10-31 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Fix inconsistency in StopReplica/LeaderAndIsr error counts

[jason] MINOR: Tighten up locking when aborting expired transactions

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H25 (couchdbtest ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 3c9e30a2f71c83b7efd45a65ffb5df5a80f48d19 
(refs/remotes/origin/trunk)
Commit message: "MINOR: Tighten up locking when aborting expired transactions"
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 3c9e30a2f71c83b7efd45a65ffb5df5a80f48d19
 > git rev-list c7ab3efcbe5d34c28e19a5a6a59962c2abfd2235 # timeout=10
Setting 
GRADLE_3_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_3.4-rc-2
[kafka-trunk-jdk9] $ /bin/bash -xe /tmp/jenkins4513261582621850564.sh
+ rm -rf 
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_3.4-rc-2/bin/gradle

FAILURE: Build failed with an exception.

* What went wrong:
Could not determine java version from '9.0.1'.

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_3_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_3.4-rc-2
ERROR: Step 'Publish JUnit test result report' failed: Test reports were found 
but none of them are new. Did tests run? 
For example, 

 is 4 days 2 hr old

Setting 
GRADLE_3_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_3.4-rc-2
Not sending mail to unregistered user wangg...@gmail.com


[GitHub] kafka pull request #4165: KAFKA 6086: Provide for custom error handling when...

2017-10-31 Thread farmdawgnation
GitHub user farmdawgnation opened a pull request:

https://github.com/apache/kafka/pull/4165

KAFKA 6086: Provide for custom error handling when Kafka Streams fails to 
produce

This PR creates and implements the `ProductionExceptionHandler` as 
described in 
[KIP-210](https://cwiki.apache.org/confluence/display/KAFKA/KIP-210+-+Provide+for+custom+error+handling++when+Kafka+Streams+fails+to+produce).

I've additionally provided some default implementations: 
`AlwaysFailProductionExceptionHandler` and 
`AlwaysContinueProductionExceptionHandler`. I fixed various compile errors in 
the tests that resulted from my changing of method signatures, but aside from 
that haven't gotten around to adding new tests for this functionality. I would 
be specifically interested in suggestions for the kinds of tests the committers 
would like to see. Barring any different suggestions, I'm probably going to add 
a few cases to `RecordCollectorTest`, I think, that exercise the always 
continue and always fail options to assert they do what they advertise?
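The handler contract described above can be sketched in a self-contained form 
roughly as follows. This is simplified and hypothetical: the real KIP-210 
interface operates on Kafka ProducerRecord instances inside the Streams API, 
so the names and signatures here are illustrative only:

```java
// Simplified sketch of a KIP-210-style production exception handler.
// The handler decides whether a produce failure should stop the application.
enum ProductionExceptionHandlerResponse { FAIL, CONTINUE }

interface ProductionExceptionHandler {
    ProductionExceptionHandlerResponse handle(Exception exception);
}

// Always fail: preserves the pre-KIP-210 behavior of stopping on any produce error.
class AlwaysFailProductionExceptionHandler implements ProductionExceptionHandler {
    @Override
    public ProductionExceptionHandlerResponse handle(Exception exception) {
        return ProductionExceptionHandlerResponse.FAIL;
    }
}

// Always continue: log the error and keep processing subsequent records.
class AlwaysContinueProductionExceptionHandler implements ProductionExceptionHandler {
    @Override
    public ProductionExceptionHandlerResponse handle(Exception exception) {
        System.err.println("Skipping failed produce: " + exception.getMessage());
        return ProductionExceptionHandlerResponse.CONTINUE;
    }
}
```

In this shape, the record collector would invoke the configured handler from 
its produce callback and fail only when the handler returns FAIL.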

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/farmdawgnation/kafka msf/kafka-6086

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4165.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4165


commit 679c1e4e612f382a5d993144f1dc234821fe0d43
Author: Matt Farmer 
Date:   2017-10-21T00:37:52Z

Implement ProductionExceptionHandler

This interface will be used to specify how to behave when there is an
exception while producing a result record to Kafka.

commit cfd357b2e9b3270393abd72813202f2a7cf950b5
Author: Matt Farmer 
Date:   2017-10-31T17:00:00Z

Provide an AlwaysFailProductionExceptionHandler.

This is a default implementation of the production exception handler
that will always instruct Streams to fail when there is an error
producing.

This implementation is consistent with the behavior before KIP-210 was
implemented.

commit 563933ca99838d7408e882f4114c4911cf1e5b59
Author: Matt Farmer 
Date:   2017-10-31T17:02:22Z

Provide an AlwaysContinueProductionExceptionHandler

This production exception handler will always continue processing in
light of errors producing result records.

commit fbdd28507005557ae83e7809830667acd7f21b01
Author: Matt Farmer 
Date:   2017-10-31T17:03:13Z

Add config declaration for production exception handler

commit 7ed622f08216aa8f4c8f2c988b58a29c997a66be
Author: Matt Farmer 
Date:   2017-10-31T17:12:16Z

Remove unused import

commit d85172557f41b6a31599a3d8bd127ecdbd520413
Author: Matt Farmer 
Date:   2017-10-31T17:19:46Z

Create the ProductionExceptionHandler, provide to RecordCollectorImpl

commit a856f4624e16ed286f5ba0d53b0b2096d7ef3c67
Author: Matt Farmer 
Date:   2017-10-31T17:26:33Z

Consult ProductionExceptionHandler when there are issues producing

When an exception is returned to the onCompletion callback, consult the
ProductionExceptionHandler when determining whether or not to fail the
application. If the response is FAIL, preserve the existing behavior. If
the response is not FAIL, simply log a warning and carry on.

commit f1f1d371d344b9a3e15385dbb1ea880280291c43
Author: Matt Farmer 
Date:   2017-10-31T17:33:17Z

Invoke ProductionExceptionHandler for exceptions outside of onCompletion

This permits the ProductionExceptionHandler to make decisions about
whether or not to try to continue when any kind of exception results
from invoking producer.send.

commit f3dc407bf5629b6bdaa177ef2a8d3cdd39c9cda7
Author: Matt Farmer 
Date:   2017-10-31T17:50:58Z

Correct compile errors in tests originating from signature changes




---


Re: [DISCUSS] KIP-210: Provide for custom error handling when Kafka Streams fails to produce

2017-10-31 Thread Matt Farmer
I've opened this pull request to implement the KIP as currently written:
https://github.com/apache/kafka/pull/4165. It still needs some tests added,
but largely represents the shape I was going for.

If there are more points that folks would like to discuss, please let me
know. If I don't hear anything by tomorrow afternoon I'll probably start a
[VOTE] thread.

Thanks,
Matt

On Fri, Oct 27, 2017 at 7:33 PM Matt Farmer  wrote:

> I can’t think of a reason that would be problematic.
>
> Most of the time I would write a handler like this, I either want to
> ignore the error or fail and bring everything down so that I can spin it
> back up later and resume from earlier offsets. When we start up after
> crashing we’ll eventually try to process the message we failed to produce
> again.
>
> I’m concerned that “putting in a queue for later” opens you up to putting
> messages into the destination topic in an unexpected order. However if
> others feel differently, I’m happy to talk about it.
>
> On Fri, Oct 27, 2017 at 7:10 PM Guozhang Wang  wrote:
>
>> > Please correct me if I'm wrong, but my understanding is that the record
>> > metadata is always null if an exception occurred while trying to
>> produce.
>>
>> That is right. Thanks.
>>
>> I looked at the example code, and one thing I realized is that since we
>> are not passing the context into the handle function, we may not be able
>> to implement the logic to send the failed records to another queue for
>> future processing. Would people think that would be a big issue?
>>
>>
>> Guozhang
>>
>>
>> On Thu, Oct 26, 2017 at 12:14 PM, Matt Farmer  wrote:
>>
>> > Hello all,
>> >
>> > I've updated the KIP based on this conversation, and made it so that its
>> > interface, config setting, and parameters line up more closely with the
>> > interface in KIP-161 (deserialization handler).
>> >
>> > I believe there are a few specific questions I need to reply to.
>> >
>> > > The question I had about the handle parameters is around the record:
>> > > should it be `ProducerRecord`, or a generic form such as
>> > > `ProducerRecord` or `ProducerRecord<? extends
>> > > Object, ? extends Object>`?
>> >
>> > At this point in the code we're guaranteed that this is a
>> > ProducerRecord<byte[], byte[]>, so the generics would just make it
>> > harder to work with the key and value.
>> >
>> > > Also, should the handle function include the `RecordMetadata` as well
>> in
>> > > case it is not null?
>> >
>> > Please correct me if I'm wrong, but my understanding is that the record
>> > metadata is always null if an exception occurred while trying to
>> produce.
>> >
>> > > We may probably try to write down at least the following handling
>> logic
>> > and
>> > > see if the given API is sufficient for it
>> >
>> > I've added some examples to the KIP. Let me know what you think.
>> >
>> > Cheers,
>> > Matt
>> >
>> > On Mon, Oct 23, 2017 at 9:00 PM Matt Farmer  wrote:
>> >
>> > > Thanks for this feedback. I’m at a conference right now and am
>> planning
>> > on
>> > > updating the KIP again with details from this conversation later this
>> > week.
>> > >
>> > > I’ll shoot you a more detailed response then! :)
>> > > On Mon, Oct 23, 2017 at 8:16 PM Guozhang Wang 
>> > wrote:
>> > >
>> > >> Thanks for the KIP Matt.
>> > >>
>> > >> Regarding the handle interface of ProductionExceptionHandlerResponse,
>> > >> could
>> > >> you write it on the wiki also, along with the actual added config
>> names
>> > >> (e.g. what
>> > >>
>> > >>
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-161%3A+streams+
>> > deserialization+exception+handlers
>> > >> described).
>> > >>
>> > >> The question I had about the handle parameters is around the
>> > >> record: should it be `ProducerRecord`, or a generic form such as
>> > >> `ProducerRecord` or `ProducerRecord<? extends
>> > >> Object, ? extends Object>`?
>> > >>
>> > >> Also, should the handle function include the `RecordMetadata` as
>> well in
>> > >> case it is not null?
>> > >>
>> > >> We may probably try to write down at least the following handling
>> logic
>> > >> and
>> > >> see if the given API is sufficient for it: 1) throw exception
>> > immediately
>> > >> to fail fast and stop the world, 2) log the error and drop record and
>> > >> proceed silently, 3) send such errors to a specific "error" Kafka
>> topic,
>> > >> or
>> > >> record it as an app-level metrics (
>> > >> https://kafka.apache.org/documentation/#kafka_streams_monitoring)
>> for
>> > >> monitoring.
>> > >>
>> > >> Guozhang
>> > >>
>> > >>
>> > >>
>> > >> On Fri, Oct 20, 2017 at 5:47 PM, Matt Farmer  wrote:
>> > >>
>> > >> > I did some more digging tonight.
>> > >> >
>> > >> > @Ted: It looks like the deserialization handler uses
>> > >> > "default.deserialization.exception.handler" for the config name. No
>> > >> > ".class" on the end. I'm inclined to think this should use
>> > >> > "default.production.exception.handler".
>> > >> >
>> > >> > On Fri, Oct 20, 2017 at 8:22 PM Matt Farmer  wrote:
>> > >> >
>> > >> 

ZkUtils.getAllPartitions giving more partition counts.

2017-10-31 Thread satyajit vegesna
Hi,

I would like to understand the purpose of ZkUtils.getAllPartitions. When I
try to use the method, I end up with the wrong number of partitions
assigned to topics, and I am not sure whether my understanding of this
method is wrong.
I had assumed this method would return the partition count, but it returns
a Set[TopicAndPartition]. When I sum up the partitions for a single topic,
the count exceeds the topic's actual partition count.

Regards,
Satyajit.


Re: [DISCUSS] KIP-215: Add topic regex support for Connect sinks

2017-10-31 Thread Jeff Klukas
I responded to Ewen's suggestions in the PR and went back to using
ConfigException.

If I don't hear any other concerns today, I'll start a [VOTE] thread for
the KIP.
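The validation being discussed in this thread — reject a connector config that sets both `topics` and `topics.regex`, and compile the pattern with Java's `Pattern` — could be sketched roughly as below. This is a standalone illustration, not the actual Connect framework code; the `ConfigException` stand-in and the `validate` helper are assumptions for the example.

```java
import java.util.regex.Pattern;
import java.util.regex.PatternSyntaxException;

public class SinkTopicsConfigSketch {

    /** Stand-in for Connect's ConfigException (assumption). */
    static class ConfigException extends RuntimeException {
        ConfigException(String message) { super(message); }
    }

    /**
     * Validates the mutually exclusive 'topics' / 'topics.regex' settings
     * and returns a compiled Pattern when the regex form is used.
     */
    static Pattern validate(String topics, String topicsRegex) {
        boolean hasList = topics != null && !topics.isEmpty();
        boolean hasRegex = topicsRegex != null && !topicsRegex.isEmpty();
        if (hasList && hasRegex) {
            throw new ConfigException("Only one of 'topics' or 'topics.regex' may be specified");
        }
        if (!hasList && !hasRegex) {
            throw new ConfigException("One of 'topics' or 'topics.regex' must be specified");
        }
        if (!hasRegex) {
            return null; // plain topic list; nothing to compile
        }
        try {
            return Pattern.compile(topicsRegex);
        } catch (PatternSyntaxException e) {
            throw new ConfigException("Invalid 'topics.regex': " + e.getMessage());
        }
    }

    public static void main(String[] args) {
        Pattern p = validate("", "metrics-.*");
        System.out.println(p.matcher("metrics-cpu").matches()); // true
    }
}
```

Using `Pattern` directly is what makes the "we only guarantee common regex syntax" caveat from the thread relevant: anything `Pattern.compile` accepts would work, even constructs a future non-Java implementation might not support.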

On Mon, Oct 30, 2017 at 9:29 PM, Ewen Cheslack-Postava 
wrote:

> I took a quick pass at the PR, looks good so far. ConfigException would
> still be fine in the case you're highlighting as it's inside the framework
> anyway and we'd expect a ConfigException from configure() if connectors try
> to use their ConfigDef to parse an invalid config. But here I don't feel
> strongly about which to use since the error message is clear anyway and
> will just end up in logs / the REST API for the user to sort out.
>
> -Ewen
>
> On Fri, Oct 27, 2017 at 6:39 PM, Jeff Klukas  wrote:
>
> > I've updated the KIP to use the topics.regex name and opened a WIP PR
> with
> > an implementation that shows some additional complexity in how the
> > configuration option gets passed through, affecting various public
> function
> > signatures.
> >
> > I would appreciate any eyes on that for feedback on whether more design
> > discussion needs to happen in the KIP.
> >
> > https://github.com/apache/kafka/pull/4151
> >
> > On Fri, Oct 27, 2017 at 7:50 AM, Jeff Klukas  wrote:
> >
> > > I added a note in the KIP about ConfigException being thrown. I also
> > > changed the proposed default for the new config to empty string rather
> > than
> > > null.
> > >
> > > Absent a clear definition of what "common" regex syntax is, it seems an
> > > undue burden to ask the user to guess at what Pattern features are
> safe.
> > If
> > > we do end up implementing a different regex style, I think it will be
> > > necessary to still support the Java Pattern style long-term as an
> option.
> > > If we want to use a different regex style as default down the road, we
> > > could require "power users" of Java Pattern to enable an additional
> > config
> > > option to maintain compatibility.
> > >
> > > One additional change I might make to the KIP is that 'topics.regex'
> > might
> > > be a better choice for config name than 'topics.pattern'. That would be
> > in
> > > keeping with RegexRouter that has a 'regex' configuration option rather
> > > than 'pattern'.
> > >
> > > On Thu, Oct 26, 2017 at 11:00 PM, Ewen Cheslack-Postava <
> > e...@confluent.io
> > > > wrote:
> > >
> > >> It's fine to be more detailed, but ConfigException is already implied
> > for
> > >> all other config issues as well.
> > >>
> > >> Default could be either null or just empty string. re: alternatives,
> if
> > >> you
> > >> wanted to be slightly more detailed (though still a bit vague) re:
> > >> supported syntax, you could just say that while Pattern is used, we
> only
> > >> guarantee support for common regular expression syntax. Not sure if
> > >> there's
> > >> a good way of defining what "common" syntax is.
> > >>
> > >> Otherwise LGTM, and thanks for helping fill in a longstanding gap!
> > >>
> > >> -Ewen
> > >>
> > >> On Thu, Oct 26, 2017 at 7:56 PM, Ted Yu  wrote:
> > >>
> > >> > bq. Users may specify only one of 'topics' or 'topics.pattern'.
> > >> >
> > >> > Can you fill in which exception would be thrown if both of them are
> > >> > specified
> > >> > ?
> > >> >
> > >> > Cheers
> > >> >
> > >> > On Thu, Oct 26, 2017 at 6:27 PM, Jeff Klukas 
> wrote:
> > >> >
> > >> > > Looking for feedback on
> > >> > >
> > >> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > >> > > 215%3A+Add+topic+regex+support+for+Connect+sinks
> > >> > >
> > >> >
> > >>
> > >
> > >
> >
>


Jenkins build is back to normal : kafka-trunk-jdk8 #2181

2017-10-31 Thread Apache Jenkins Server
See 




Re: [VOTE] 1.0.0 RC4

2017-10-31 Thread Jeff Chao
+1 (non-binding). We ran our usual performance and regression suite and
found no noticeable negative impacts.

- Jeff
Heroku

On Tue, Oct 31, 2017 at 8:54 AM, Ted Yu  wrote:

> +1 (non-binding)
>
> Verified signatures.
> Ran test suite.
>
> On Tue, Oct 31, 2017 at 8:53 AM, Manikumar 
> wrote:
>
> > +1 (non-binding). Verified quickstart, ran producer/consumer perf
> scripts,
> > streams quickstart
> > ran tests on src distribution.
> >
> > On Tue, Oct 31, 2017 at 8:42 PM, Ismael Juma  wrote:
> >
> > > +1 (binding) from me. Tested the quickstart with the source and binary
> > > (Scala 2.12) artifacts, ran the tests on the source artifact and
> verified
> > > some signatures and hashes on source and binary (Scala 2.11) artifacts.
> > >
> > > Thanks for running the release, Guozhang!
> > >
> > > Ismael
> > >
> > > On Fri, Oct 27, 2017 at 6:28 PM, Guozhang Wang 
> > wrote:
> > >
> > > > Hello Kafka users, developers and client-developers,
> > > >
> > > > This is the fifth candidate for release of Apache Kafka 1.0.0. The
> main
> > > PRs
> > > > that got merged in after RC3 are the following:
> > > >
> > > > *https://github.com/apache/kafka/commit/def1a768a6301c14ad66
> 11358716ab
> > > > 03de04e76b
> > > >  11358716ab
> > > > 03de04e76b>*
> > > >
> > > > *https://github.com/apache/kafka/commit/b9fc0f2e6892062efa1f
> ff0c6f7bfc
> > > > 683c8ba7ab
> > > >  ff0c6f7bfc
> > > > 683c8ba7ab>*
> > > >
> > > > *https://github.com/apache/kafka/commit/a51fdcd2ee7efbd14857
> 448a2fb7ec
> > > > b71531e1f9
> > > >  448a2fb7ec
> > > > b71531e1f9>*
> > > >
> > > > *https://github.com/apache/kafka/commit/109a60c77a56d4afed48
> 8c3ba35dc8
> > > > 459fde15ce
> > > >  8c3ba35dc8
> > > > 459fde15ce>*
> > > >
> > > > It's worth noting that starting in this version we are using a
> > different
> > > > version protocol with three digits: *major.minor.bug-fix*
> > > >
> > > > Any and all testing is welcome, but the following areas are worth
> > > > highlighting:
> > > >
> > > > 1. Client developers should verify that their clients can
> > produce/consume
> > > > to/from 1.0.0 brokers (ideally with compressed and uncompressed
> data).
> > > > 2. Performance and stress testing. Heroku and LinkedIn have helped
> with
> > > > this in the past (and issues have been found and fixed).
> > > > 3. End users can verify that their apps work correctly with the new
> > > > release.
> > > >
> > > > This is a major version release of Apache Kafka. It includes 29 new
> > KIPs.
> > > > See the release notes and release plan
> > > > (*https://cwiki.apache.org/confluence/pages/viewpage.
> > > > action?pageId=71764913
> > > >  > > pageId=71764913
> > > > >*)
> > > > for more details. A few feature highlights:
> > > >
> > > > * Java 9 support with significantly faster TLS and CRC32C
> > implementations
> > > > * JBOD improvements: disk failure only disables failed disk but not
> the
> > > > broker (KIP-112/KIP-113 part I)
> > > > * Controller improvements: reduced logging change to greatly
> accelerate
> > > > admin request handling.
> > > > * Newly added metrics across all the modules (KIP-164, KIP-168,
> > KIP-187,
> > > > KIP-188, KIP-196)
> > > > * Kafka Streams API improvements (KIP-120 / 130 / 138 / 150 / 160 /
> > 161),
> > > > and drop compatibility "Evolving" annotations
> > > >
> > > > Release notes for the 1.0.0 release:
> > > > *http://home.apache.org/~guozhang/kafka-1.0.0-rc4/RELEASE_NOTES.html
> > > >  >*
> > > >
> > > >
> > > >
> > > > *** Please download, test and vote by Tuesday, October 31, 8pm PT
> > > >
> > > > Kafka's KEYS file containing PGP keys we use to sign the release:
> > > > http://kafka.apache.org/KEYS
> > > >
> > > > * Release artifacts to be voted upon (source and binary):
> > > > *http://home.apache.org/~guozhang/kafka-1.0.0-rc4/
> > > > *
> > > >
> > > > * Maven artifacts to be voted upon:
> > > > https://repository.apache.org/content/groups/staging/org/apa
> che/kafka/
> > > >
> > > > * Javadoc:
> > > > *http://home.apache.org/~guozhang/kafka-1.0.0-rc4/javadoc/
> > > > *
> > > >
> > > > * Tag to be voted upon (off 1.0 branch) is the 1.0.0-rc4 tag:
> > > >
> > > > https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=
> > > > d4a3919e408e444dde5db5a261c6f912cb8475a2
> > > >
> > > > * Documentation:
> > > > Note the documentation can't be pushed live due to changes that will
> > not
> > > go
> > > > live until the release. You can manually verify by downloading
> > > > http://home.apache.org/~guozhang/kafka-1.0.0-rc4/
> > > > kafka_2.11-

[GitHub] kafka pull request #4164: Adding Trivago logo

2017-10-31 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4164


---


Build failed in Jenkins: kafka-trunk-jdk9 #163

2017-10-31 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] MINOR: Adding Trivago logo to Streams landing page

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H25 (couchdbtest ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 51787027159f6f206df928a5c8bd2a18bacd3d5c 
(refs/remotes/origin/trunk)
Commit message: "MINOR: Adding Trivago logo to Streams landing page"
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 51787027159f6f206df928a5c8bd2a18bacd3d5c
 > git rev-list 3c9e30a2f71c83b7efd45a65ffb5df5a80f48d19 # timeout=10
Setting 
GRADLE_3_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_3.4-rc-2
[kafka-trunk-jdk9] $ /bin/bash -xe /tmp/jenkins8045971617999606823.sh
+ rm -rf 
+ /home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_3.4-rc-2/bin/gradle

FAILURE: Build failed with an exception.

* What went wrong:
Could not determine java version from '9.0.1'.

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_3_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_3.4-rc-2
ERROR: Step 'Publish JUnit test result report' failed: Test reports were found 
but none of them are new. Did tests run? 
For example, 

 is 4 days 5 hr old

Setting 
GRADLE_3_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_3.4-rc-2
Not sending mail to unregistered user wangg...@gmail.com


Re: [DISCUSS] KIP-204 : adding records deletion operation to the new Admin Client API

2017-10-31 Thread Colin McCabe
Hi Paolo,

This looks like a good proposal.  I think it's probably ready to take it
to a vote soon?

Also, in the "Compatibility, Deprecation, and Migration Plan" section,
you might want to mention the internal Scala interface for doing this
which was added in KIP-107.  We should expect users to migrate from that
interface to the new one over time.

best,
Colin


On Wed, Oct 25, 2017, at 03:47, Paolo Patierno wrote:
> Thanks for all your feedback guys. I have updated my current code as
> well.
> 
> I know that the vote for this KIP is not started yet (actually I opened
> it due to no feedback on this KIP after a while but then the discussion
> started and it was really useful !) but I have already opened a PR for
> that.
> 
> Maybe feedback could be useful on that as well :
> 
> 
> https://github.com/apache/kafka/pull/4132
> 
> 
> Thanks
> 
> 
> Paolo Patierno
> Senior Software Engineer (IoT) @ Red Hat
> Microsoft MVP on Azure & IoT
> Microsoft Azure Advisor
> 
> Twitter : @ppatierno
> Linkedin : paolopatierno
> Blog : DevExperience
> 
> 
> 
> From: Colin McCabe 
> Sent: Monday, October 23, 2017 4:34 PM
> To: dev@kafka.apache.org
> Subject: Re: [DISCUSS] KIP-204 : adding records deletion operation to the
> new Admin Client API
> 
> On Mon, Oct 23, 2017, at 01:37, Tom Bentley wrote:
> > At the risk of muddying the waters further, have you considered
> > "RecordsToDelete" as the name of the class? It's both shorter and more
> > descriptive imho.
> 
> +1 for RecordsToDelete
> 
> >
> > Also "deleteBefore()" as the factory method name isn't very future proof
> > if
> > we came to support time-based deletion. Something like "beforeOffset()"
> > would be clearer, imho.
> 
> Great idea.
> 
> best,
> Colin
> 
> >
> > Putting these together: RecordsToDelete.beforeOffset() seems much clearer
> > to me than DeleteRecordsTarget.deleteBefore()
> >
> >
> > On 23 October 2017 at 08:45, Paolo Patierno  wrote:
> >
> > > About the name: I started to have a doubt about DeletionTarget
> > > because it could apply to any deletion operation (e.g. delete topic,
> > > ...) and not just what we want now, i.e. records deletion.
> > >
> > > I have updated the KIP-204 using DeleteRecordsTarget so it's clear that
> > > it's related to the delete records operation and what it means, so the
> > > target for such operation.
> > >
> > >
> > > Paolo Patierno
> > > Senior Software Engineer (IoT) @ Red Hat
> > > Microsoft MVP on Azure & IoT
> > > Microsoft Azure Advisor
> > >
> > > Twitter : @ppatierno
> > > Linkedin : paolopatierno
> > > Blog : DevExperience
> > >
> > >
> > > 
> > > From: Paolo Patierno 
> > > Sent: Monday, October 23, 2017 7:38 AM
> > > To: dev@kafka.apache.org
> > > Subject: Re: [DISCUSS] KIP-204 : adding records deletion operation to the
> > > new Admin Client API
> > >
> > > Hi Colin,
> > >
> > > I was using the long primitive in the code but had not updated the KIP
> > > yet, sorry ... now it's updated!
> > >
> > > At same time I agree on using DeletionTarget ... KIP updated !
> > >
> > >
> > > Regarding the deleteBefore factory method, it's a pattern already used
> > > with NewPartitions.increaseTo, which I think is really clear and gives
> > > us more room to evolve this DeletionTarget class if we add different
> > > ways to specify such a target, not only offset-based ones.
> > >
> > >
> > > Thanks,
> > >
> > >
> > > Paolo Patierno
> > > Senior Software Engineer (IoT) @ Red Hat
> > > Microsoft MVP on Azure & IoT
> > > Microsoft Azure Advisor
> > >
> > > Twitter : @ppatierno
> > > Linkedin : paolopatierno
> > > Blog : DevExperience
> > >
> > >
> > > 
> > > From: Colin McCabe 
> > > Sent: Friday, October 20, 2017 8:18 PM
> > > To: dev@kafka.apache.org
> > > Subject: Re: [DISCUSS] KIP-204 : adding records deletion operation to the
> > > new Admin Client API
> > >
> > > > /** Describe records to delete */
> > >  > public class DeleteRecords {
> > >  > private Long offset;
> > >
> > > "DeleteRecords" doesn't really explain what the class is, though.  How
> > > about "DeletionTarget"?  Also, why do we need a Long object rather than
> > > a long primitive?
> > >
> > >  >
> > >  > /**
> > >  > * Delete all the records before the given {@code offset}
> > >  > */
> > >  > public static DeleteRecords deleteBefore(Long offset) { ... }
> > >
> > > This seems confusing to me.  What's wrong with a regular constructor for
> > > DeletionTarget?
> > >
> > > best,
> > > Colin
> > >
> > >
> > > On Fri, Oct 20, 2017, at 01:28, Paolo Patierno wrote:
> > > > Hi all,
> > > >
> > > >
> > > > I h

Re: [DISCUSS] KIP-204 : adding records deletion operation to the new Admin Client API

2017-10-31 Thread Paolo Patierno
Hi Colin,
thanks !

This morning (Italy time zone) I started the vote for this KIP. Up to now there 
are 5 non-binding votes.

In any case, I'll update the section you mentioned. I totally agree with you on 
giving more info to developers who are using the Scala API.

Thanks
Paolo
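A minimal, self-contained sketch of the factory-method shape this thread converged on for KIP-204 (a stand-in class for illustration only — the actual Kafka class may differ in detail):

```java
// Hypothetical stand-in for the RecordsToDelete shape discussed in this
// thread (KIP-204); not the real Kafka code.
public class Main {

    /** Target describing which records to delete. */
    static final class RecordsToDelete {
        private final long beforeOffset; // primitive long, as settled in the thread

        private RecordsToDelete(long beforeOffset) {
            this.beforeOffset = beforeOffset;
        }

        /** Factory method, extensible later to e.g. time-based targets. */
        static RecordsToDelete beforeOffset(long offset) {
            return new RecordsToDelete(offset);
        }

        long beforeOffset() {
            return beforeOffset;
        }
    }

    public static void main(String[] args) {
        RecordsToDelete target = RecordsToDelete.beforeOffset(42L);
        System.out.println("delete records before offset " + target.beforeOffset());
    }
}
```

The static factory (rather than a public constructor) is what leaves room to add other target kinds later without changing the class's API surface.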

From: Colin McCabe 
Sent: Tuesday, October 31, 2017 9:49:59 PM
To: dev@kafka.apache.org
Subject: Re: [DISCUSS] KIP-204 : adding records deletion operation to the new 
Admin Client API

Hi Paolo,

This looks like a good proposal.  I think it's probably ready to take it
to a vote soon?

Also, in the "Compatibility, Deprecation, and Migration Plan" section,
you might want to mention the internal Scala interface for doing this
which was added in KIP-107.  We should expect users to migrate from that
interface to the new one over time.

best,
Colin


On Wed, Oct 25, 2017, at 03:47, Paolo Patierno wrote:
> Thanks for all your feedback guys. I have updated my current code as
> well.
>
> I know that the vote for this KIP has not started yet (actually I opened
> it due to the lack of feedback on this KIP after a while, but then the
> discussion started and it was really useful !) but I have already opened
> a PR for it.
>
> Maybe feedback could be useful on that as well :
>
>
> https://github.com/apache/kafka/pull/4132
>
>
> Thanks
>
>
> Paolo Patierno
> Senior Software Engineer (IoT) @ Red Hat
> Microsoft MVP on Azure & IoT
> Microsoft Azure Advisor
>
> Twitter : @ppatierno
> Linkedin : paolopatierno
> Blog : DevExperience
>
>
> 
> From: Colin McCabe 
> Sent: Monday, October 23, 2017 4:34 PM
> To: dev@kafka.apache.org
> Subject: Re: [DISCUSS] KIP-204 : adding records deletion operation to the
> new Admin Client API
>
> On Mon, Oct 23, 2017, at 01:37, Tom Bentley wrote:
> > At the risk of muddying the waters further, have you considered
> > "RecordsToDelete" as the name of the class? It's both shorter and more
> > descriptive imho.
>
> +1 for RecordsToDelete
>
> >
> > Also "deleteBefore()" as the factory method name isn't very future proof
> > if
> > we came to support time-based deletion. Something like "beforeOffset()"
> > would be clearer, imho.
>
> Great idea.
>
> best,
> Colin
>
> >
> > Putting these together: RecordsToDelete.beforeOffset() seems much clearer
> > to me than DeleteRecordsTarget.deleteBefore()
> >
> >
> > On 23 October 2017 at 08:45, Paolo Patierno  wrote:
> >
> > > About the name, I just started to have a doubt about DeletionTarget
> > > because it could be bound to any deletion operation (i.e. delete topic,
> > > ...) and not just what we want now, i.e. records deletion.
> > >
> > > I have updated KIP-204 to use DeleteRecordsTarget so it's clear that
> > > it's related to the delete records operation and what it means: the
> > > target for such an operation.
> > >
> > >
> > > Paolo Patierno
> > > Senior Software Engineer (IoT) @ Red Hat
> > > Microsoft MVP on Azure & IoT
> > > Microsoft Azure Advisor
> > >
> > > Twitter : @ppatierno
> > > Linkedin : paolopatierno
> > > Blog : DevExperience
> > >
> > >
> > > 
> > > From: Paolo Patierno 
> > > Sent: Monday, October 23, 2017 7:38 AM
> > > To: dev@kafka.apache.org
> > > Subject: Re: [DISCUSS] KIP-204 : adding records deletion operation to the
> > > new Admin Client API
> > >
> > > Hi Colin,
> > >
> > > I was using the long primitive in the code but had not updated the KIP
> > > yet, sorry ... now it's updated !
> > >
> > > At the same time, I agree on using DeletionTarget ... KIP updated !
> > >
> > >
> > > Regarding the deleteBefore factory method, it's a pattern already used
> > > with NewPartitions.increaseTo, which I think is really clear and gives us
> > > more room to evolve this DeletionTarget class if we add different
> > > ways to specify such a target, not only offset-based.
> > >
> > >
> > > Thanks,
> > >
> > >
> > > Paolo Patierno
> > > Senior Software Engineer (IoT) @ Red Hat
> > > Microsoft MVP on Azure & IoT
> > > Microsoft Azure Advisor
> > >
> > > Twitter : @ppatierno
> > > Linkedin : paolopatierno
> > > Blog : DevExperience
> > >
> > >
> > > 
> > > From: Colin McCabe 
> > > Sent: Friday, October 20, 2017 8:18 PM
> > > To: dev@kafka.apache.org
> > > Subject: Re: [DISCUSS] KIP-204 : adding records deletion operation to the
> > > new Admin Client API
> > >
> > > > /** Describe records to delete */
> > >  > public class DeleteRecords {
> > >  > private Long offset;
> > >
> > > "DeleteRecords" doesn't really explain what the class is, though.  How
> > > about "DeletionTarget"?  Also, why do we need a Long objec

Re: Metadata class doesn't "expose" topics with errors

2017-10-31 Thread Guozhang Wang
Hello Paolo,

I'm looking at your PR for KIP-204 now. Will reply on the discussion thread
/ PR diff file directly if I find anything.


Guozhang

On Tue, Oct 24, 2017 at 5:45 AM, Paolo Patierno  wrote:

> Hi Guozhang,
>
> thanks for replying !
>
>
> I see your point about the Metadata class which doesn't need to expose
> errors because transient.
>
>
> Regarding KIP-204, the delete records operation in the "legacy" client
> doesn't have any retry logic; it just returns the error to the user,
> who should retry (on the topics where the operation failed).
>
> If I were to add retry logic in the "new" admin client, considering a
> delete records operation on several topic partitions at the same time, I
> would have to retry if at least one of the topic partitions comes back
> with LEADER_NOT_AVAILABLE (after the metadata request), without
> proceeding with the other topic partitions which do have leaders.
>
> Maybe it's better to continue with the operation on such topics and come
> back to the user with LEADER_NOT_AVAILABLE for the others (this is the
> current behaviour of the "legacy" admin client).
>
>
> For now, the implementation I have (I'll push a PR soon) uses the
> Call class to send a MetadataRequest, and then its handleResponse() uses
> another Call instance to send the DeleteRecordsRequest.
>
>
> Thanks
>
>
> Paolo Patierno
> Senior Software Engineer (IoT) @ Red Hat
> Microsoft MVP on Azure & IoT
> Microsoft Azure Advisor
>
> Twitter : @ppatierno
> Linkedin : paolopatierno
> Blog : DevExperience
>
>
> 
> From: Guozhang Wang 
> Sent: Tuesday, October 24, 2017 12:52 AM
> To: dev@kafka.apache.org
> Subject: Re: Metadata class doesn't "expose" topics with errors
>
> Hello Paolo,
>
> The reason we filter out the per-topic errors in the generated Cluster is
> that Metadata, and the Cluster returned by its fetch(), is a common class
> used among all clients (producer, consumer, connect, streams, admin), and
> is treated as a high-level representation of the current snapshot of the
> hosted topic information of the cluster; hence we intentionally exclude
> any transient errors from the representation, to abstract such issues away
> from its users.
>
> As for your implementation of KIP-204, I think just waiting and retrying
> until the updated metadata.fetch() Cluster contains the leader information
> for the topic is fine: if a LEADER_NOT_AVAILABLE is returned, you'll need
> to back off and retry anyway, right?
>
>
> Guozhang
>
>
>
> On Mon, Oct 23, 2017 at 2:36 AM, Paolo Patierno 
> wrote:
>
> > Finally another plan could be to use nesting of runnable calls.
> >
> > The first one for asking metadata (using the MetadataRequest which
> > provides us all the errors) and then sending the delete records requests
> in
> > the handleResponse() of such metadata request.
> >
> >
> > Paolo Patierno
> > Senior Software Engineer (IoT) @ Red Hat
> > Microsoft MVP on Azure & IoT
> > Microsoft Azure Advisor
> >
> > Twitter : @ppatierno
> > Linkedin : paolopatierno
> > Blog : DevExperience
> >
> >
> > 
> > From: Paolo Patierno 
> > Sent: Monday, October 23, 2017 9:06 AM
> > To: dev@kafka.apache.org
> > Subject: Metadata class doesn't "expose" topics with errors
> >
> > Hi devs,
> >
> > while developing the KIP-204 (having delete records operation in the
> "new"
> > Admin Client) I'm facing with the following doubt (or maybe a lack of
> info)
> > ...
> >
> >
> > As described by KIP-107 (which implements this feature at protocol level
> > and in the "legacy" Admin Client), the request needs to be sent to the
> > leader.
> >
> >
> > For both KIPs, the operation takes a Map<TopicPartition, offset> (offset
> > is a long in the "legacy" API but it's becoming a class in the "new" API)
> > and, in order to reduce the number of requests to different leaders, my
> > code groups partitions having the same leader, so having a
> > Map<Node, Map<TopicPartition, offset>>.
> >
> >
> > In order to know the leaders I need to request metadata and there are two
> > ways for doing that :
> >
> >
> >   *   using something like the producer does with Metadata class, putting
> > the topics, request update and waiting for it
> >   *   using the low level MetadataRequest and handling the related
> > response (which is what the "legacy" API does today)
> >
> > I noticed that, when building the Cluster object from the MetadataResponse,
> > topics with errors are skipped, which means that in the final "high level"
> > Metadata class (fetching the Cluster object) there is no information about
> > them. So with the first solution we have no info about topics with errors
> > (maybe the only error I'm able to handle is LEADER_NOT_AVAILABLE, if
> > leaderFor() on the Cluster returns a null Node).
> >
> > Is there any specific re
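The per-leader grouping Paolo describes — bucketing partition→offset entries by each partition's leader so that only one DeleteRecordsRequest goes to each broker — can be sketched self-containedly. Plain strings stand in for Node and TopicPartition, and `leaderFor` models `Cluster.leaderFor()`; none of this is the actual AdminClient code:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class Main {
    /** Group partition -> offset entries by the leader broker of each partition. */
    static Map<String, Map<String, Long>> groupByLeader(
            Map<String, Long> offsetsByPartition,
            Function<String, String> leaderFor) {   // stand-in for Cluster.leaderFor()
        Map<String, Map<String, Long>> byLeader = new HashMap<>();
        for (Map.Entry<String, Long> e : offsetsByPartition.entrySet()) {
            String leader = leaderFor.apply(e.getKey());
            // A null leader corresponds to LEADER_NOT_AVAILABLE; collect those
            // under a sentinel key so the caller can report the error per partition
            // while still proceeding with the partitions that do have leaders.
            String key = leader == null ? "LEADER_NOT_AVAILABLE" : leader;
            byLeader.computeIfAbsent(key, k -> new HashMap<>())
                    .put(e.getKey(), e.getValue());
        }
        return byLeader;
    }

    public static void main(String[] args) {
        Map<String, Long> offsets = new HashMap<>();
        offsets.put("t0-0", 10L);
        offsets.put("t0-1", 20L);
        offsets.put("t1-0", 5L);
        // Toy leader assignment: t0-* on broker1, t1-* on broker2.
        Map<String, Map<String, Long>> grouped = groupByLeader(offsets,
                tp -> tp.startsWith("t0") ? "broker1" : "broker2");
        System.out.println(grouped.get("broker1").size()); // partitions batched for broker1
        System.out.println(grouped.get("broker2").size()); // partitions batched for broker2
    }
}
```

One request per map key is then enough, which is exactly the request-reduction goal stated above.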

Re: Metadata class doesn't "expose" topics with errors

2017-10-31 Thread Paolo Patierno
Hi Guozhang,

thanks ! Really appreciated !
Yes I think that at this point, having an implementation proposal, it makes 
more sense to comment on the PR directly.

Thanks
Paolo

From: Guozhang Wang 
Sent: Tuesday, October 31, 2017 10:00:22 PM
To: dev@kafka.apache.org
Subject: Re: Metadata class doesn't "expose" topics with errors

Hello Paolo,

I'm looking at your PR for KIP-204 now. Will reply on the discussion thread
/ PR diff file directly if I find anything.


Guozhang

On Tue, Oct 24, 2017 at 5:45 AM, Paolo Patierno  wrote:

> Hi Guozhang,
>
> thanks for replying !
>
>
> I see your point about the Metadata class which doesn't need to expose
> errors because transient.
>
>
> Regarding KIP-204, the delete records operation in the "legacy" client
> doesn't have any retry logic; it just returns the error to the user,
> who should retry (on the topics where the operation failed).
>
> If I were to add retry logic in the "new" admin client, considering a
> delete records operation on several topic partitions at the same time, I
> would have to retry if at least one of the topic partitions comes back
> with LEADER_NOT_AVAILABLE (after the metadata request), without
> proceeding with the other topic partitions which do have leaders.
>
> Maybe it's better to continue with the operation on such topics and come
> back to the user with LEADER_NOT_AVAILABLE for the others (this is the
> current behaviour of the "legacy" admin client).
>
>
> For now, the implementation I have (I'll push a PR soon) uses the
> Call class to send a MetadataRequest, and then its handleResponse() uses
> another Call instance to send the DeleteRecordsRequest.
>
>
> Thanks
>
>
> Paolo Patierno
> Senior Software Engineer (IoT) @ Red Hat
> Microsoft MVP on Azure & IoT
> Microsoft Azure Advisor
>
> Twitter : @ppatierno
> Linkedin : paolopatierno
> Blog : DevExperience
>
>
> 
> From: Guozhang Wang 
> Sent: Tuesday, October 24, 2017 12:52 AM
> To: dev@kafka.apache.org
> Subject: Re: Metadata class doesn't "expose" topics with errors
>
> Hello Paolo,
>
> The reason we filter out the per-topic errors in the generated Cluster is
> that Metadata, and the Cluster returned by its fetch(), is a common class
> used among all clients (producer, consumer, connect, streams, admin), and
> is treated as a high-level representation of the current snapshot of the
> hosted topic information of the cluster; hence we intentionally exclude
> any transient errors from the representation, to abstract such issues away
> from its users.
>
> As for your implementation of KIP-204, I think just waiting and retrying
> until the updated metadata.fetch() Cluster contains the leader information
> for the topic is fine: if a LEADER_NOT_AVAILABLE is returned, you'll need
> to back off and retry anyway, right?
>
>
> Guozhang
>
>
>
> On Mon, Oct 23, 2017 at 2:36 AM, Paolo Patierno 
> wrote:
>
> > Finally another plan could be to use nesting of runnable calls.
> >
> > The first one for asking metadata (using the MetadataRequest which
> > provides us all the errors) and then sending the delete records requests
> in
> > the handleResponse() of such metadata request.
> >
> >
> > Paolo Patierno
> > Senior Software Engineer (IoT) @ Red Hat
> > Microsoft MVP on Azure & IoT
> > Microsoft Azure Advisor
> >
> > Twitter : @ppatierno
> > Linkedin : paolopatierno
> > Blog : DevExperience
> >
> >
> > 
> > From: Paolo Patierno 
> > Sent: Monday, October 23, 2017 9:06 AM
> > To: dev@kafka.apache.org
> > Subject: Metadata class doesn't "expose" topics with errors
> >
> > Hi devs,
> >
> > while developing the KIP-204 (having delete records operation in the
> "new"
> > Admin Client) I'm facing with the following doubt (or maybe a lack of
> info)
> > ...
> >
> >
> > As described by KIP-107 (which implements this feature at protocol level
> > and in the "legacy" Admin Client), the request needs to be sent to the
> > leader.
> >
> >
> > For both KIPs, the operation takes a Map<TopicPartition, offset> (offset
> > is a long in the "legacy" API but it's becoming a class in the "new" API)
> > and, in order to reduce the number of requests to different leaders, my
> > code groups partitions having the same leader, so having a
> > Map<Node, Map<TopicPartition, offset>>.
> >
> >
> > In order to know the leaders I need to request metadata and there are two
> > ways for doing that :
> >
> >
> >   *   using something like the producer does with Metadata class, putting
> > the topics, request update and waiting for it
> >   *   using the low level MetadataRequest and handling the related
> > response (which is what the "legacy" API does today)
> >
> > I noticed that building the Cluster object from the MetadataResponse, the
> > topics with errors are skipped and
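The wait-and-retry approach Guozhang suggests amounts to a small backoff loop around the leader lookup. A self-contained sketch — the supplier is a stand-in for checking `metadata.fetch().leaderFor(tp)`, and a null result models LEADER_NOT_AVAILABLE:

```java
import java.util.function.Supplier;

public class Main {
    /** Retry a leader lookup with fixed backoff until it succeeds or attempts run out. */
    static String awaitLeader(Supplier<String> leaderLookup, int maxAttempts, long backoffMs)
            throws InterruptedException {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            String leader = leaderLookup.get();   // null models LEADER_NOT_AVAILABLE
            if (leader != null) {
                return leader;
            }
            if (attempt < maxAttempts) {
                Thread.sleep(backoffMs);          // back off before retrying
            }
        }
        throw new IllegalStateException(
                "leader not available after " + maxAttempts + " attempts");
    }

    public static void main(String[] args) throws InterruptedException {
        // Toy lookup that fails twice before the leader becomes known.
        int[] calls = {0};
        String leader = awaitLeader(() -> ++calls[0] < 3 ? null : "broker1", 5, 1L);
        System.out.println(leader + " after " + calls[0] + " lookups");
    }
}
```

A real client would bound the loop by a request timeout rather than an attempt count, but the shape of the retry is the same.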

Jenkins build is back to normal : kafka-1.0-jdk7 #65

2017-10-31 Thread Apache Jenkins Server
See 




Re: [VOTE] 1.0.0 RC4

2017-10-31 Thread Vahid S Hashemian
+1 (non-binding)

Built the jars and ran quickstart successfully on Ubuntu.

Thanks.
--Vahid
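The checksum part of the release verification the voters describe comes down to comparing a computed digest with the published one. A self-contained Java sketch of the SHA-512 step, run on in-memory stand-in bytes rather than a real downloaded artifact:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class Main {
    /** Hex-encode a digest the way sha512sum prints it. */
    static String sha512Hex(byte[] data) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-512").digest(data);
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // Stand-in bytes; in a real verification this would be the release tarball.
        byte[] artifact = "kafka-1.0.0 release bytes".getBytes(StandardCharsets.UTF_8);
        String computed = sha512Hex(artifact);
        // A real check compares against the .sha512 file published next to the
        // artifact; here the "published" value is a stand-in.
        String published = computed;
        System.out.println(computed.equals(published) ? "checksum OK" : "checksum MISMATCH");
        System.out.println("digest length: " + computed.length()); // SHA-512 -> 128 hex chars
    }
}
```

Signature verification (the `.asc` files voters mention) additionally needs the release manager's public key and a PGP implementation, which is outside this sketch.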




From:   Jeff Chao 
To: "dev@kafka.apache.org" 
Date:   10/31/2017 12:07 PM
Subject:Re: [VOTE] 1.0.0 RC4



+1 (non-binding). We ran our usual performance and regression suite and
found no noticeable negative impacts.

- Jeff
Heroku

On Tue, Oct 31, 2017 at 8:54 AM, Ted Yu  wrote:

> +1 (non-binding)
>
> Verified signatures.
> Ran test suite.
>
> On Tue, Oct 31, 2017 at 8:53 AM, Manikumar 
> wrote:
>
> > +1 (non-binding). Verified quickstart, ran producer/consumer perf
> scripts,
> > streams quickstart
> > ran tests on src distribution.
> >
> > On Tue, Oct 31, 2017 at 8:42 PM, Ismael Juma  
wrote:
> >
> > > +1 (binding) from me. Tested the quickstart with the source and 
binary
> > > (Scala 2.12) artifacts, ran the tests on the source artifact and
> verified
> > > some signatures and hashes on source and binary (Scala 2.11) 
artifacts.
> > >
> > > Thanks for running the release, Guozhang!
> > >
> > > Ismael
> > >
> > > On Fri, Oct 27, 2017 at 6:28 PM, Guozhang Wang 
> > wrote:
> > >
> > > > Hello Kafka users, developers and client-developers,
> > > >
> > > > This is the fifth candidate for release of Apache Kafka 1.0.0. The
> main
> > > PRs
> > > > that gets merged in after RC3 are the following:
> > > >
> > > > *https://github.com/apache/kafka/commit/def1a768a6301c14ad6611358716ab03de04e76b*
> > > >
> > > > *https://github.com/apache/kafka/commit/b9fc0f2e6892062efa1fff0c6f7bfc683c8ba7ab*
> > > >
> > > > *https://github.com/apache/kafka/commit/a51fdcd2ee7efbd14857448a2fb7ecb71531e1f9*
> > > >
> > > > *https://github.com/apache/kafka/commit/109a60c77a56d4afed488c3ba35dc8459fde15ce*
> > > >
> > > > It's worth noting that starting in this version we are using a
> > different
> > > > version protocol with three digits: *major.minor.bug-fix*
> > > >
> > > > Any and all testing is welcome, but the following areas are worth
> > > > highlighting:
> > > >
> > > > 1. Client developers should verify that their clients can
> > produce/consume
> > > > to/from 1.0.0 brokers (ideally with compressed and uncompressed
> data).
> > > > 2. Performance and stress testing. Heroku and LinkedIn have helped
> with
> > > > this in the past (and issues have been found and fixed).
> > > > 3. End users can verify that their apps work correctly with the 
new
> > > > release.
> > > >
> > > > This is a major version release of Apache Kafka. It includes 29 
new
> > KIPs.
> > > > See the release notes and release plan
> > > > 
> > > > (*https://cwiki.apache.org/confluence/pages/viewpage

Build failed in Jenkins: kafka-trunk-jdk7 #2938

2017-10-31 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] MINOR: Adding Trivago logo to Streams landing page

--
[...truncated 1.84 MB...]
org.apache.kafka.streams.KafkaStreamsTest > testStateGlobalThreadClose PASSED

org.apache.kafka.streams.KafkaStreamsTest > 
shouldThrowExceptionSettingUncaughtExceptionHandlerNotInCreateState STARTED

org.apache.kafka.streams.KafkaStreamsTest > 
shouldThrowExceptionSettingUncaughtExceptionHandlerNotInCreateState PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCleanup STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCleanup PASSED

org.apache.kafka.streams.KafkaStreamsTest > 
shouldThrowExceptionSettingStateListenerNotInCreateState STARTED

org.apache.kafka.streams.KafkaStreamsTest > 
shouldThrowExceptionSettingStateListenerNotInCreateState PASSED

org.apache.kafka.streams.KafkaStreamsTest > testNumberDefaultMetrics STARTED

org.apache.kafka.streams.KafkaStreamsTest > testNumberDefaultMetrics PASSED

org.apache.kafka.streams.KafkaStreamsTest > shouldReturnThreadMetadata STARTED

org.apache.kafka.streams.KafkaStreamsTest > shouldReturnThreadMetadata PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCloseIsIdempotent STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCloseIsIdempotent PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCannotCleanupWhileRunning 
STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCannotCleanupWhileRunning PASSED

org.apache.kafka.streams.KafkaStreamsTest > testStateThreadClose STARTED

org.apache.kafka.streams.KafkaStreamsTest > testStateThreadClose PASSED

org.apache.kafka.streams.KafkaStreamsTest > testStateChanges STARTED

org.apache.kafka.streams.KafkaStreamsTest > testStateChanges PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCannotStartTwice STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCannotStartTwice PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfProducerEnableIdempotenceIsOverriddenIfEosEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfProducerEnableIdempotenceIsOverriddenIfEosEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > shouldUseNewConfigsWhenPresent 
STARTED

org.apache.kafka.streams.StreamsConfigTest > shouldUseNewConfigsWhenPresent 
PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > testGetConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > shouldAcceptAtLeastOnce STARTED

org.apache.kafka.streams.StreamsConfigTest > shouldAcceptAtLeastOnce PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldUseCorrectDefaultsWhenNoneSpecified STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldUseCorrectDefaultsWhenNoneSpecified PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerEnableIdempotenceIfEosDisabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerEnableIdempotenceIfEosDisabled PASSED

org.apache.kafka.streams.StreamsConfigTest > defaultSerdeShouldBeConfigured 
STARTED

org.apache.kafka.streams.StreamsConfigTest > defaultSerdeShouldBeConfigured 
PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSetDifferentDefaultsIfEosEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSetDifferentDefaultsIfEosEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldNotOverrideUserConfigRetriesIfExactlyOnceEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldNotOverrideUserConfigRetriesIfExactlyOnceEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetProducerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > testGetProducerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerMaxInFlightRequestPerConnectionsWhenEosDisabled 
STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerMaxInFlightRequestPerConnectionsWhenEosDisabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowStreamsExceptionIfValueSerdeConfigFails STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowStreamsExceptionIfValueSerdeConfigFails PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfConsumerAutoCommitIsOverridden STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfConsumerAutoCommitIsOverridden PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowExceptionIfNotAtLestOnceOrExactlyO

Re: [DISCUSS] KIP-217: Expose a timeout to allow an expired ZK session to be re-created

2017-10-31 Thread Stephane Maarek
Hi Jun,

Thanks for the reply.

1) The reason I'm asking is that I wonder whether it's worth focusing the
development effort on taking ownership of the existing PR
(https://github.com/apache/zookeeper/pull/150) to fix ZOOKEEPER-2184, rebasing
it, and having it merged into the ZK codebase shortly. I feel this KIP might
introduce a setting that could be deprecated soon and confuse the end user a
bit further with one more knob to turn.

3) I'm not sure if I fully understand, sorry for the beginner's question: if
the default timeout is infinite, then it won't change anything about how Kafka
works today, will it? (Unless I'm missing something, sorry.) If it is not set
to infinite, don't we introduce the risk of a whole cluster shutting down at
once?

Thanks,
Stephane
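On question (2), for reference: the JVM does expose runtime knobs for its own DNS cache via security properties, although — as the quoted reply notes — invalidating the cache does not fix every unresolved-host case. A self-contained sketch; note the settings should be applied before the first lookup, and they do not re-resolve addresses an already-connected client is holding:

```java
import java.net.InetAddress;
import java.security.Security;

public class Main {
    public static void main(String[] args) throws Exception {
        // JVM-level DNS caching is controlled by security properties.
        // A TTL of 0 disables caching of successful and failed lookups.
        Security.setProperty("networkaddress.cache.ttl", "0");
        Security.setProperty("networkaddress.cache.negative.ttl", "0");

        // Subsequent lookups now go to the resolver instead of the JVM cache.
        InetAddress addr = InetAddress.getByName("localhost");
        System.out.println("resolved " + addr.getHostAddress());
    }
}
```

These properties can also be set in `java.security` or via `-Dsun.net.inetaddr.ttl`, but the underlying limitation Jun points out remains: a host that simply does not resolve anymore needs human intervention either way.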

On 31/10/17, 1:00 pm, "Jun Rao"  wrote:

Hi, Stephane,

Thanks for the reply.

1) Fixing the issue in ZK will be ideal. Not sure when it will happen
though. Once it's fixed, we can probably deprecate this config.

2) That could be useful. Is there a java api to do that at runtime? Also,
invalidating DNS cache doesn't always fix the issue of unresolved host. In
some of the cases, human intervention is needed.

3) The default timeout is infinite though.

Jun


On Sat, Oct 28, 2017 at 11:48 PM, Stephane Maarek <
steph...@simplemachines.com.au> wrote:

> Hi Jun,
>
> I think this is very helpful. Restarting Kafka brokers in case of 
zookeeper
> host change is not a well known operation.
>
> Few questions:
> 1) would it not be worth fixing the problem at the source ? This has been
> stuck for a while though, maybe a little push would help :
> https://issues.apache.org/jira/plugins/servlet/mobile#issue/ZOOKEEPER-2184
>
> 2) upon recreating the zookeeper object , is it not possible to invalidate
> the DNS cache so that it resolves the new hostname?
>
> 3) could the cluster be down in this situation: one migrates an entire
> zookeeper cluster to new machines (one by one). The quorum is still alive
> without downtime, but now every broker in a cluster can't resolve 
zookeeper
> at the same time. They all shut down at the same time after the new
> time-out setting.
>
> Thanks !
> Stéphane
>
> On 28 Oct. 2017 9:42 am, "Jun Rao"  wrote:
>
> > Hi, Everyone,
> >
> > We created "KIP-217: Expose a timeout to allow an expired ZK session to
> be
> > re-created".
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 217%3A+Expose+a+timeout+to+allow+an+expired+ZK+session+to+be+re-created
> >
> > Please take a look and provide your feedback.
> >
> > Thanks,
> >
> > Jun
> >
>





Re: [DISCUSS] KIP-210: Provide for custom error handling when Kafka Streams fails to produce

2017-10-31 Thread Guozhang Wang
That sounds reasonable, thanks Matt.

As for the implementation, please note that there is another ongoing PR
that may touch the same classes that you are working on:
https://github.com/apache/kafka/pull/4148

So it may help if you can also take a look at that PR and see if it is
compatible with your changes.



Guozhang
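The handler shape under discussion — a callback that inspects the failed record and exception and decides whether to continue or fail — can be sketched with hypothetical stand-in types. The real KIP-210 interface takes a `ProducerRecord<byte[], byte[]>` and its config and naming may differ; this is illustration only:

```java
public class Main {
    // Hypothetical stand-ins mirroring the KIP-210 shape discussed in this thread.
    enum ProductionExceptionHandlerResponse { CONTINUE, FAIL }

    interface ProductionExceptionHandler {
        // The real interface receives a ProducerRecord<byte[], byte[]>;
        // a String stands in for it here to keep the sketch self-contained.
        ProductionExceptionHandlerResponse handle(String record, Exception exception);
    }

    /** A log-and-continue handler: record the failure, keep processing. */
    static final ProductionExceptionHandler LOG_AND_CONTINUE = (record, exception) -> {
        System.out.println("dropping record after send failure: " + exception.getMessage());
        return ProductionExceptionHandlerResponse.CONTINUE;
    };

    public static void main(String[] args) {
        ProductionExceptionHandlerResponse r = LOG_AND_CONTINUE.handle(
                "key=1,value=a", new RuntimeException("broker unreachable"));
        System.out.println("handler says: " + r);
    }
}
```

A fail-fast handler would simply return FAIL instead, matching the "stop the world and resume from earlier offsets after restart" option described later in the thread.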


On Tue, Oct 31, 2017 at 10:59 AM, Matt Farmer  wrote:

> I've opened this pull request to implement the KIP as currently written:
> https://github.com/apache/kafka/pull/4165. It still needs some tests
> added,
> but largely represents the shape I was going for.
>
> If there are more points that folks would like to discuss, please let me
> know. If I don't hear anything by tomorrow afternoon I'll probably start a
> [VOTE] thread.
>
> Thanks,
> Matt
>
> On Fri, Oct 27, 2017 at 7:33 PM Matt Farmer  wrote:
>
> > I can’t think of a reason that would be problematic.
> >
> > Most of the time I would write a handler like this, I either want to
> > ignore the error or fail and bring everything down so that I can spin it
> > back up later and resume from earlier offsets. When we start up after
> > crashing we’ll eventually try to process the message we failed to produce
> > again.
> >
> > I’m concerned that “putting in a queue for later” opens you up to putting
> > messages into the destination topic in an unexpected order. However if
> > others feel differently, I’m happy to talk about it.
> >
> > On Fri, Oct 27, 2017 at 7:10 PM Guozhang Wang 
> wrote:
> >
> >> > Please correct me if I'm wrong, but my understanding is that the
> record
> >> > metadata is always null if an exception occurred while trying to
> >> produce.
> >>
> >> That is right. Thanks.
> >>
> >> I looked at the example code, and one thing I realized is that since we
> >> are not passing the context into the handle function, we may not be able
> >> to implement the
> >> logic to send the failed records into another queue for future processing.
> >> Would people think that would be a big issue?
> >>
> >>
> >> Guozhang
> >>
> >>
> >> On Thu, Oct 26, 2017 at 12:14 PM, Matt Farmer  wrote:
> >>
> >> > Hello all,
> >> >
> >> > I've updated the KIP based on this conversation, and made it so that
> its
> >> > interface, config setting, and parameters line up more closely with
> the
> >> > interface in KIP-161 (deserialization handler).
> >> >
> >> > I believe there are a few specific questions I need to reply to.
> >> >
> >> > > The question I had about then handle parameters are around the
> record,
> >> > > should it be `ProducerRecord`, or be generics of
> >> > > `ProducerRecord` or `ProducerRecord >> extends
> >> > > Object, ? extends Object>`?
> >> >
> >> > At this point in the code we're guaranteed that this is a
> >> > ProducerRecord, so the generics would just make it
> >> harder
> >> > to work with the key and value.
> >> >
> >> > > Also, should the handle function include the `RecordMetadata` as
> well
> >> in
> >> > > case it is not null?
> >> >
> >> > Please correct me if I'm wrong, but my understanding is that the
> record
> >> > metadata is always null if an exception occurred while trying to
> >> produce.
> >> >
> >> > > We may probably try to write down at least the following handling
> >> logic
> >> > and
> >> > > see if the given API is sufficient for it
> >> >
> >> > I've added some examples to the KIP. Let me know what you think.
> >> >
> >> > Cheers,
> >> > Matt
> >> >
> >> > On Mon, Oct 23, 2017 at 9:00 PM Matt Farmer  wrote:
> >> >
> >> > > Thanks for this feedback. I’m at a conference right now and am
> >> planning
> >> > on
> >> > > updating the KIP again with details from this conversation later
> this
> >> > week.
> >> > >
> >> > > I’ll shoot you a more detailed response then! :)
> >> > > On Mon, Oct 23, 2017 at 8:16 PM Guozhang Wang 
> >> > wrote:
> >> > >
> >> > >> Thanks for the KIP Matt.
> >> > >>
> >> > >> Regarding the handle interface of ProductionExceptionHandlerResp
> onse,
> >> > >> could
> >> > >> you write it on the wiki also, along with the actual added config
> >> names
> >> > >> (e.g. what
> >> > >>
> >> > >>
> >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-161%3A+streams+deserialization+exception+handlers
> >> > >> described).
> >> > >>
> >> > >> The question I had about the handle parameters is around the
> >> > >> record: should it be `ProducerRecord<byte[], byte[]>`, or be generics
> >> > >> of `ProducerRecord<K, V>` or `ProducerRecord<? extends Object, ?
> >> > >> extends Object>`?
> >> > >>
> >> > >> Also, should the handle function include the `RecordMetadata` as
> >> well in
> >> > >> case it is not null?
> >> > >>
> >> > >> We may probably try to write down at least the following handling
> >> logic
> >> > >> and
> >> > >> see if the given API is sufficient for it: 1) throw exception
> >> > immediately
> >> > >> to fail fast and stop the world, 2) log the error and drop record
> and
> >> > >> proceed silently, 3) send such errors to a specific "error" Kafka
> >> topic,
> >> > >> or
> >> > >> rec

Re: [DISCUSS]: KIP-159: Introducing Rich functions to Streams

2017-10-31 Thread Jeyhun Karimov
Hi,

I removed the 'commit()' feature, as we discussed. It simplified the
overall design of the KIP a lot.
If it is ok, I would like to start a VOTE thread.

Cheers,
Jeyhun
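
[Editor's note: for readers following along, the Rich-function idea under
discussion can be sketched with stand-in types. The RecordContext accessors
below mirror the snippet quoted later in this thread; the RichValueMapper
signature is an assumption for illustration only, not the KIP's final API.]

```java
// Stand-in for the record metadata holder discussed in KIP-159; the four
// accessors match the code snippet quoted later in this thread.
interface RecordContext {
    long offset();
    long timestamp();
    String topic();
    int partition();
}

// A "rich" mapper sees the record's metadata alongside its value.
// Exact name and shape are assumed here for illustration.
interface RichValueMapper<V, VR> {
    VR apply(V value, RecordContext context);
}

class RichFunctionSketch {
    // Fixed example context standing in for what the runtime would supply.
    static RecordContext exampleContext() {
        return new RecordContext() {
            public long offset() { return 42L; }
            public long timestamp() { return 1509494400000L; }
            public String topic() { return "input"; }
            public int partition() { return 3; }
        };
    }

    public static void main(String[] args) {
        // The mapper can use record metadata without ever touching
        // ProcessorContext, which is the point of the proposal.
        RichValueMapper<String, String> mapper = (value, ctx) ->
                ctx.topic() + "[" + ctx.partition() + "@" + ctx.offset() + "]: " + value;
        System.out.println(mapper.apply("hello", exampleContext())); // prints input[3@42]: hello
    }
}
```

This also illustrates why `commit()` sits awkwardly on such an object: every method above describes one record, while a commit applies to processing progress as a whole.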

On Fri, Oct 27, 2017 at 5:28 PM Matthias J. Sax 
wrote:

> Thanks. I understand what you are saying, but I don't agree that
>
> > but also we need a commit() method
>
> I would just not provide `commit()` at DSL level and close the
> corresponding Jira as "not a problem" or similar.
>
>
> -Matthias
>
> On 10/27/17 3:42 PM, Jeyhun Karimov wrote:
> > Hi Matthias,
> >
> > Thanks for your comments. I agree that this is not the best way to do
> > it. A bit of history behind this design:
> >
> > Prior to doing this, I tried to provide ProcessorContext itself as an
> > argument in Rich interfaces. However, we don't want to give users that
> > flexibility and “power”. Moreover, ProcessorContext contains
> > processor-level information, not record-level info. The only thing we
> > need in ProcessorContext is the commit() method.
> >
> > So, as far as I understood, we need record context (offset, timestamp,
> > etc.) but also a commit() method (we don't want to provide
> > ProcessorContext as a parameter so users can use
> > ProcessorContext.commit()).
> >
> > As a result, I thought to “propagate” the commit() call from
> > RecordContext to ProcessorContext.
> >
> > If there is a misunderstanding in the motivation/discussion of the KIP
> > or the included jiras, please let me know.
> >
> >
> > Cheers,
> > Jeyhun
> >
> >
> > On Fri 27. Oct 2017 at 12:39, Matthias J. Sax 
> wrote:
> >
> >> I am personally still not convinced that we should add `commit()` at
> >> all.
> >>
> >> @Guozhang: you created the original Jira. Can you elaborate a little
> >> bit? Isn't requesting commits a low-level API that should not be exposed
> >> in the DSL? I just want to understand the motivation better. Why would
> >> anybody who uses the DSL ever want to request a commit? To me,
> >> requesting commits is useful if you manipulated state explicitly, i.e.,
> >> via the Processor API.
> >>
> >> Also, for the solution: it seems rather unnatural to me that we add
> >> `commit()` to `RecordContext` -- from my understanding, `RecordContext`
> >> is a helper object that provides access to record metadata. Requesting
> >> a commit is something quite different. Additionally, a commit does not
> >> commit a specific record, but a `RecordContext` is for a specific record.
> >>
> >> To me, this does not seem to be a sound API design if we follow this
> path.
> >>
> >>
> >> -Matthias
> >>
> >>
> >>
> >> On 10/26/17 10:41 PM, Jeyhun Karimov wrote:
> >>> Hi,
> >>>
> >>> Thanks for your suggestions.
> >>>
> >>> I have some comments, to make sure that there is no misunderstanding.
> >>>
> >>>
> >>> 1. Maybe we can deprecate the `commit()` in ProcessorContext, to
> enforce
>  user to consolidate this call as
>  "processorContext.recordContext().commit()". And internal
> implementation
>  of
>  `ProcessorContext.commit()` in `ProcessorContextImpl` is also changed
> to
>  this call.
> >>>
> >>>
> >>> - I think we should not deprecate `ProcessorContext.commit()`. The main
> >>> intuition behind introducing `commit()` in `RecordContext` is that
> >>> `RecordContext` is the one provided in the Rich interfaces. So if the
> >>> user wants to commit, then there should be some method inside
> >>> `RecordContext` to do so. Internally, `RecordContext.commit()` calls
> >>> `ProcessorContext.commit()` (see the last code snippet in KIP-159):
> >>>
> >>> @Override
> >>> public void process(final K1 key, final V1 value) {
> >>>
> >>> recordContext = new RecordContext() {   //
> >>> recordContext initialization is added in this KIP
> >>> @Override
> >>> public void commit() {
> >>> context().commit();
> >>> }
> >>>
> >>> @Override
> >>> public long offset() {
> >>> return context().recordContext().offset();
> >>> }
> >>>
> >>> @Override
> >>> public long timestamp() {
> >>> return context().recordContext().timestamp();
> >>> }
> >>>
> >>> @Override
> >>> public String topic() {
> >>> return context().recordContext().topic();
> >>> }
> >>>
> >>> @Override
> >>> public int partition() {
> >>> return context().recordContext().partition();
> >>> }
> >>>   };
> >>>
> >>>
> >>> So, we cannot deprecate `ProcessorContext.commit()` in this case IMO.
> >>>
> >>>
> >>> 2. Add the `task` reference to the impl class,
> `ProcessorRecordContext`,
> >> so
>  that it can implement the commit call itself.
> >>>
> >>>
> >>> - Actually, I don't think that we need `commit()` in
> >>> `ProcessorRecordContext`. The main intuition is to "transfer"
> >>> `ProcessorContext.commit()` call to Rich interfaces, to support
> >>> 

Re: [DISCUSS] KIP-210: Provide for custom error handling when Kafka Streams fails to produce

2017-10-31 Thread Matt Farmer
Thanks for the heads up. Yes, I think my changes are compatible with that
PR, but there will be a merge conflict that happens whenever one of the PRs
is merged. Happy to reconcile the changes in my PR if 4148 goes in first. :)
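
[Editor's note: a minimal sketch of the handler shape being discussed. The
enum and interface names follow the KIP text; the stand-in ProducerRecord
class and the exact method signature here are assumptions, not the final
committed API.]

```java
// Stand-in for the producer record type; only what the example needs.
class ProducerRecord<K, V> {
    final String topic;
    final K key;
    final V value;
    ProducerRecord(String topic, K key, V value) {
        this.topic = topic;
        this.key = key;
        this.value = value;
    }
}

enum ProductionExceptionHandlerResponse { CONTINUE, FAIL }

// Invoked when a produce attempt fails; decides whether the streams
// instance keeps going or shuts down.
interface ProductionExceptionHandler {
    ProductionExceptionHandlerResponse handle(ProducerRecord<byte[], byte[]> record,
                                              Exception exception);
}

// Example policy from the discussion: log the failure, drop the record,
// and continue processing.
class LogAndContinueHandler implements ProductionExceptionHandler {
    @Override
    public ProductionExceptionHandlerResponse handle(ProducerRecord<byte[], byte[]> record,
                                                     Exception exception) {
        System.err.println("Failed to produce to " + record.topic + ": " + exception.getMessage());
        return ProductionExceptionHandlerResponse.CONTINUE;
    }
}

class ProductionExceptionHandlerSketch {
    public static void main(String[] args) {
        ProductionExceptionHandler handler = new LogAndContinueHandler();
        ProductionExceptionHandlerResponse response = handler.handle(
                new ProducerRecord<>("out-topic", new byte[0], new byte[0]),
                new RuntimeException("broker unavailable"));
        System.out.println(response); // prints CONTINUE
    }
}
```

The "fail fast" policy discussed in the thread would simply return FAIL instead of CONTINUE.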

On Tue, Oct 31, 2017 at 6:44 PM Guozhang Wang  wrote:

> That sounds reasonable, thanks Matt.
>
> As for the implementation, please note that there is another ongoing PR
> that may touch the same classes that you are working on:
> https://github.com/apache/kafka/pull/4148
>
> So it may help if you can also take a look at that PR and see if it is
> compatible with your changes.
>
>
>
> Guozhang
>
>
> On Tue, Oct 31, 2017 at 10:59 AM, Matt Farmer  wrote:
>
> > I've opened this pull request to implement the KIP as currently written:
> > https://github.com/apache/kafka/pull/4165. It still needs some tests
> > added,
> > but largely represents the shape I was going for.
> >
> > If there are more points that folks would like to discuss, please let me
> > know. If I don't hear anything by tomorrow afternoon I'll probably start
> a
> > [VOTE] thread.
> >
> > Thanks,
> > Matt
> >
> > On Fri, Oct 27, 2017 at 7:33 PM Matt Farmer  wrote:
> >
> > > I can’t think of a reason that would be problematic.
> > >
> > > Most of the time I would write a handler like this, I either want to
> > > ignore the error or fail and bring everything down so that I can spin
> it
> > > back up later and resume from earlier offsets. When we start up after
> > > crashing we’ll eventually try to process the message we failed to
> produce
> > > again.
> > >
> > > I’m concerned that “putting in a queue for later” opens you up to
> putting
> > > messages into the destination topic in an unexpected order. However if
> > > others feel differently, I’m happy to talk about it.
> > >
> > > On Fri, Oct 27, 2017 at 7:10 PM Guozhang Wang 
> > wrote:
> > >
> > >> > Please correct me if I'm wrong, but my understanding is that the
> > record
> > >> > metadata is always null if an exception occurred while trying to
> > >> produce.
> > >>
> > >> That is right. Thanks.
> > >>
> > >> I looked at the example code, and one thing I realized is that since
> > >> we are not passing the context to the handle function, we may not be
> > >> able to implement the logic to send the failed records into another
> > >> queue for future processing.
> > >> Would people think that would be a big issue?
> > >>
> > >>
> > >> Guozhang
> > >>
> > >>
> > >> On Thu, Oct 26, 2017 at 12:14 PM, Matt Farmer  wrote:
> > >>
> > >> > Hello all,
> > >> >
> > >> > I've updated the KIP based on this conversation, and made it so that
> > its
> > >> > interface, config setting, and parameters line up more closely with
> > the
> > >> > interface in KIP-161 (deserialization handler).
> > >> >
> > >> > I believe there are a few specific questions I need to reply to.
> > >> >
> > >> > > The question I had about the handle parameters is around the
> > >> > > record: should it be `ProducerRecord<byte[], byte[]>`, or be
> > >> > > generics of `ProducerRecord<K, V>` or `ProducerRecord<? extends
> > >> > > Object, ? extends Object>`?
> > >> >
> > >> > At this point in the code we're guaranteed that this is a
> > >> > ProducerRecord<byte[], byte[]>, so the generics would just make it
> > >> > harder
> > >> > to work with the key and value.
> > >> >
> > >> > > Also, should the handle function include the `RecordMetadata` as
> > well
> > >> in
> > >> > > case it is not null?
> > >> >
> > >> > Please correct me if I'm wrong, but my understanding is that the
> > record
> > >> > metadata is always null if an exception occurred while trying to
> > >> produce.
> > >> >
> > >> > > We may probably try to write down at least the following handling
> > >> logic
> > >> > and
> > >> > > see if the given API is sufficient for it
> > >> >
> > >> > I've added some examples to the KIP. Let me know what you think.
> > >> >
> > >> > Cheers,
> > >> > Matt
> > >> >
> > >> > On Mon, Oct 23, 2017 at 9:00 PM Matt Farmer  wrote:
> > >> >
> > >> > > Thanks for this feedback. I’m at a conference right now and am
> > >> planning
> > >> > on
> > >> > > updating the KIP again with details from this conversation later
> > this
> > >> > week.
> > >> > >
> > >> > > I’ll shoot you a more detailed response then! :)
> > >> > > On Mon, Oct 23, 2017 at 8:16 PM Guozhang Wang  >
> > >> > wrote:
> > >> > >
> > >> > >> Thanks for the KIP Matt.
> > >> > >>
> > >> > >> Regarding the handle interface of ProductionExceptionHandlerResponse,
> > >> > >> could
> > >> > >> you write it on the wiki also, along with the actual added config
> > >> names
> > >> > >> (e.g. what
> > >> > >>
> > >> > >>
> > >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-161%3A+streams+deserialization+exception+handlers
> > >> > >> described).
> > >> > >>
> > >> > >> The question I had about the handle parameters is around the
> > >> > >> record: should it be `ProducerRecord<byte[], byte[]>`, or be
> > >> > >> generics of `ProducerRecord<K, V>` or `ProducerRecord<? extends
> > >> > >> Object, ? extends Object>`?

Re: [DISCUSS] KIP-217: Expose a timeout to allow an expired ZK session to be re-created

2017-10-31 Thread Jeff Widman
Agree with Stephane that it's worth at least taking a shot at trying to get
ZOOKEEPER-2184 fixed rather than adding a config that will be deprecated in
the not-too-distant future.

I know Zookeeper development feels more like the turtle than the hare these
days, but Kafka is a high-visibility project, so there's a decent chance
you'll be able to get the attention of the zookeeper maintainers to get a
patch merged and possibly even a new release cut incorporating this fix.

On Tue, Oct 31, 2017 at 3:28 PM, Stephane Maarek <
steph...@simplemachines.com.au> wrote:

> Hi Jun,
>
> Thanks for the reply.
>
> 1) The reason I'm asking about it is I wonder if it's not worth focusing
> the development efforts on taking ownership of the existing PR (
> https://github.com/apache/zookeeper/pull/150)  to fix ZOOKEEPER-2184,
> rebase it and have it merged into the ZK codebase shortly.  I feel this KIP
> might introduce a setting that could be deprecated shortly and confuse the
> end user a bit further with one more knob to turn.
>
> 3) I'm not sure if I fully understand, sorry for the beginner's question:
> if the default timeout is infinite, then it won't change anything to how
> Kafka works from today, does it? (unless I'm missing something sorry). If
> not set to infinite, then we introduce the risk of a whole cluster shutting
> down at once?
>
> Thanks,
> Stephane
>
> On 31/10/17, 1:00 pm, "Jun Rao"  wrote:
>
> Hi, Stephane,
>
> Thanks for the reply.
>
> 1) Fixing the issue in ZK will be ideal. Not sure when it will happen
> though. Once it's fixed, we can probably deprecate this config.
>
> 2) That could be useful. Is there a java api to do that at runtime?
> Also,
> invalidating DNS cache doesn't always fix the issue of unresolved
> host. In
> some of the cases, human intervention is needed.
>
> 3) The default timeout is infinite though.
>
> Jun
>
>
> On Sat, Oct 28, 2017 at 11:48 PM, Stephane Maarek <
> steph...@simplemachines.com.au> wrote:
>
> > Hi Jun,
> >
> > I think this is very helpful. Restarting Kafka brokers in case of
> zookeeper
> > host change is not a well known operation.
> >
> > Few questions:
> > 1) would it not be worth fixing the problem at the source ? This has
> been
> > stuck for a while though, maybe a little push would help :
> > https://issues.apache.org/jira/plugins/servlet/mobile#issue/ZOOKEEPER-2184
> >
> > 2) upon recreating the zookeeper object , is it not possible to
> invalidate
> > the DNS cache so that it resolves the new hostname?
> >
> > 3) could the cluster be down in this situation: one migrates an
> entire
> > zookeeper cluster to new machines (one by one). The quorum is still
> alive
> > without downtime, but now every broker in a cluster can't resolve
> zookeeper
> > at the same time. They all shut down at the same time after the new
> > time-out setting.
> >
> > Thanks !
> > Stéphane
> >
> > On 28 Oct. 2017 9:42 am, "Jun Rao"  wrote:
> >
> > > Hi, Everyone,
> > >
> > > We created "KIP-217: Expose a timeout to allow an expired ZK
> session to
> > be
> > > re-created".
> > >
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-217%3A+Expose+a+timeout+to+allow+an+expired+ZK+session+to+be+re-created
> > >
> > > Please take a look and provide your feedback.
> > >
> > > Thanks,
> > >
> > > Jun
> > >
> >
>
>
>
>


-- 

*Jeff Widman*
jeffwidman.com  | 740-WIDMAN-J (943-6265)
<><
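

[Editor's note: on the DNS-cache question in the quoted thread — as far as I
know, the JDK exposes no public API to flush the InetAddress cache at
runtime, which matches Jun's point that human intervention is sometimes
needed. What can be done is bounding the cache TTL before the first lookup
so stale entries expire on their own. A sketch; the TTL values are
illustrative, not recommendations:]

```java
import java.security.Security;

class DnsCacheTtlSketch {
    public static void main(String[] args) {
        // These security properties control the JVM-wide InetAddress cache.
        // They are read when InetAddress initializes, so they must be set
        // before the first hostname lookup in the process (or in
        // java.security / via a -D flag at startup).
        Security.setProperty("networkaddress.cache.ttl", "30");          // successful lookups: 30s
        Security.setProperty("networkaddress.cache.negative.ttl", "10"); // failed lookups: 10s

        System.out.println(Security.getProperty("networkaddress.cache.ttl")); // prints 30
    }
}
```

With a bounded TTL, a re-created ZooKeeper client would eventually resolve a moved hostname without a broker restart, though it does nothing for hosts that never resolve at all.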


Re: [DISCUSS] KIP-217: Expose a timeout to allow an expired ZK session to be re-created

2017-10-31 Thread Gwen Shapira
Fixing this in ZK won't be enough though. We'd need the fix included in a
stable release, and then bump Kafka's dependency to it. I doubt this KIP
will be deprecated shortly even if the ZK bug is fixed immediately.

On Tue, Oct 31, 2017 at 4:59 PM Jeff Widman  wrote:

> Agree with Stephane that it's worth at least taking a shot at trying to get
> ZOOKEEPER-2184 fixed rather than adding a config that will be deprecated in
> the not-too-distant future.
>
> I know Zookeeper development feels more like the turtle than the hare these
> days, but Kafka is a high-visibility project, so there's a decent chance
> you'll be able to get the attention of the zookeeper maintainers to get a
> patch merged and possibly even a new release cut incorporating this fix.
>
> On Tue, Oct 31, 2017 at 3:28 PM, Stephane Maarek <
> steph...@simplemachines.com.au> wrote:
>
> > Hi Jun,
> >
> > Thanks for the reply.
> >
> > 1) The reason I'm asking about it is I wonder if it's not worth focusing
> > the development efforts on taking ownership of the existing PR (
> > https://github.com/apache/zookeeper/pull/150)  to fix ZOOKEEPER-2184,
> > rebase it and have it merged into the ZK codebase shortly.  I feel this
> KIP
> > might introduce a setting that could be deprecated shortly and confuse
> the
> > end user a bit further with one more knob to turn.
> >
> > 3) I'm not sure if I fully understand, sorry for the beginner's question:
> > if the default timeout is infinite, then it won't change anything to how
> > Kafka works from today, does it? (unless I'm missing something sorry). If
> > not set to infinite, then we introduce the risk of a whole cluster
> shutting
> > down at once?
> >
> > Thanks,
> > Stephane
> >
> > On 31/10/17, 1:00 pm, "Jun Rao"  wrote:
> >
> > Hi, Stephane,
> >
> > Thanks for the reply.
> >
> > 1) Fixing the issue in ZK will be ideal. Not sure when it will happen
> > though. Once it's fixed, we can probably deprecate this config.
> >
> > 2) That could be useful. Is there a java api to do that at runtime?
> > Also,
> > invalidating DNS cache doesn't always fix the issue of unresolved
> > host. In
> > some of the cases, human intervention is needed.
> >
> > 3) The default timeout is infinite though.
> >
> > Jun
> >
> >
> > On Sat, Oct 28, 2017 at 11:48 PM, Stephane Maarek <
> > steph...@simplemachines.com.au> wrote:
> >
> > > Hi Jun,
> > >
> > > I think this is very helpful. Restarting Kafka brokers in case of
> > zookeeper
> > > host change is not a well known operation.
> > >
> > > Few questions:
> > > 1) would it not be worth fixing the problem at the source ? This
> has
> > been
> > > stuck for a while though, maybe a little push would help :
> > > https://issues.apache.org/jira/plugins/servlet/mobile#issue/ZOOKEEPER-2184
> > >
> > > 2) upon recreating the zookeeper object , is it not possible to
> > invalidate
> > > the DNS cache so that it resolves the new hostname?
> > >
> > > 3) could the cluster be down in this situation: one migrates an
> > entire
> > > zookeeper cluster to new machines (one by one). The quorum is still
> > alive
> > > without downtime, but now every broker in a cluster can't resolve
> > zookeeper
> > > at the same time. They all shut down at the same time after the new
> > > time-out setting.
> > >
> > > Thanks !
> > > Stéphane
> > >
> > > On 28 Oct. 2017 9:42 am, "Jun Rao"  wrote:
> > >
> > > > Hi, Everyone,
> > > >
> > > > We created "KIP-217: Expose a timeout to allow an expired ZK
> > session to
> > > be
> > > > re-created".
> > > >
> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-217%3A+Expose+a+timeout+to+allow+an+expired+ZK+session+to+be+re-created
> > > >
> > > > Please take a look and provide your feedback.
> > > >
> > > > Thanks,
> > > >
> > > > Jun
> > > >
> > >
> >
> >
> >
> >
>
>
> --
>
> *Jeff Widman*
> jeffwidman.com  | 740-WIDMAN-J (943-6265)
> <><
>


[GitHub] kafka pull request #4126: KAFKA-6072: User ZookeeperClient in GroupCoordinat...

2017-10-31 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4126


---


[GitHub] kafka pull request #4166: KAFKA-6074 Use ZookeeperClient in ReplicaManager a...

2017-10-31 Thread tedyu
GitHub user tedyu opened a pull request:

https://github.com/apache/kafka/pull/4166

KAFKA-6074 Use ZookeeperClient in ReplicaManager and Partition



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tedyu/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4166.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4166


commit 81298fe6bfdce9dbd30c5c5601b8ea172c2e04c8
Author: tedyu 
Date:   2017-11-01T03:10:10Z

KAFKA-6074 Use ZookeeperClient in ReplicaManager and Partition




---


Re: [VOTE] 1.0.0 RC4

2017-10-31 Thread Satish Duggana
+1 (non-binding)
Verified signatures, ran tests on src dist.

Thanks,
Satish.


On Wed, Nov 1, 2017 at 12:37 AM, Jeff Chao  wrote:

> +1 (non-binding). We ran our usual performance and regression suite and
> found no noticeable negative impacts.
>
> - Jeff
> Heroku
>
> On Tue, Oct 31, 2017 at 8:54 AM, Ted Yu  wrote:
>
> > +1 (non-binding)
> >
> > Verified signatures.
> > Ran test suite.
> >
> > On Tue, Oct 31, 2017 at 8:53 AM, Manikumar 
> > wrote:
> >
> > > +1 (non-binding). Verified quickstart, ran producer/consumer perf
> > scripts,
> > > streams quickstart
> > > ran tests on src distribution.
> > >
> > > On Tue, Oct 31, 2017 at 8:42 PM, Ismael Juma 
> wrote:
> > >
> > > > +1 (binding) from me. Tested the quickstart with the source and
> binary
> > > > (Scala 2.12) artifacts, ran the tests on the source artifact and
> > verified
> > > > some signatures and hashes on source and binary (Scala 2.11)
> artifacts.
> > > >
> > > > Thanks for running the release, Guozhang!
> > > >
> > > > Ismael
> > > >
> > > > On Fri, Oct 27, 2017 at 6:28 PM, Guozhang Wang 
> > > wrote:
> > > >
> > > > > Hello Kafka users, developers and client-developers,
> > > > >
> > > > > This is the fifth candidate for release of Apache Kafka 1.0.0. The
> > main
> > > > PRs
> > > > > that gets merged in after RC3 are the following:
> > > > >
> > > > > *https://github.com/apache/kafka/commit/def1a768a6301c14ad6611358716ab03de04e76b*
> > > > >
> > > > > *https://github.com/apache/kafka/commit/b9fc0f2e6892062efa1fff0c6f7bfc683c8ba7ab*
> > > > >
> > > > > *https://github.com/apache/kafka/commit/a51fdcd2ee7efbd14857448a2fb7ecb71531e1f9*
> > > > >
> > > > > *https://github.com/apache/kafka/commit/109a60c77a56d4afed488c3ba35dc8459fde15ce*
> > > > >
> > > > > It's worth noting that starting in this version we are using a
> > > different
> > > > > version protocol with three digits: *major.minor.bug-fix*
> > > > >
> > > > > Any and all testing is welcome, but the following areas are worth
> > > > > highlighting:
> > > > >
> > > > > 1. Client developers should verify that their clients can
> > > produce/consume
> > > > > to/from 1.0.0 brokers (ideally with compressed and uncompressed
> > data).
> > > > > 2. Performance and stress testing. Heroku and LinkedIn have helped
> > with
> > > > > this in the past (and issues have been found and fixed).
> > > > > 3. End users can verify that their apps work correctly with the new
> > > > > release.
> > > > >
> > > > > This is a major version release of Apache Kafka. It includes 29 new
> > > KIPs.
> > > > > See the release notes and release plan
> > > > > (*https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=71764913*)
> > > > > for more details. A few feature highlights:
> > > > >
> > > > > * Java 9 support with significantly faster TLS and CRC32C
> > > implementations
> > > > > * JBOD improvements: disk failure only disables failed disk but not
> > the
> > > > > broker (KIP-112/KIP-113 part I)
> > > > > * Controller improvements: reduced logging change to greatly
> > accelerate
> > > > > admin request handling.
> > > > > * Newly added metrics across all the modules (KIP-164, KIP-168,
> > > KIP-187,
> > > > > KIP-188, KIP-196)
> > > > > * Kafka Streams API improvements (KIP-120 / 130 / 138 / 150 / 160 /
> > > 161),
> > > > > and drop compatibility "Evolving" annotations
> > > > >
> > > > > Release notes for the 1.0.0 release:
> > > > > *http://home.apache.org/~guozhang/kafka-1.0.0-rc4/RELEASE_NOTES.html*
> > > > >
> > > > >
> > > > >
> > > > > *** Please download, test and vote by Tuesday, October 31, 8pm PT
> > > > >
> > > > > Kafka's KEYS file containing PGP keys we use to sign the release:
> > > > > http://kafka.apache.org/KEYS
> > > > >
> > > > > * Release artifacts to be voted upon (source and binary):
> > > > > *http://home.apache.org/~guozhang/kafka-1.0.0-rc4/*
> > > > >
> > > > > * Maven artifacts to be voted upon:
> > > > > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> > > > >
> > > > > * Javadoc:
> > > > > *http://home.apache.org/~guozhang/kafka-1.0.0-rc4/javadoc/*
> > > > >
> > > > > * Tag to be voted upon (off 1.0 branch) is the 1

Re: [VOTE] 1.0.0 RC4

2017-10-31 Thread Guozhang Wang
+1 from myself.

We have passed the voting deadline with 3 binding +1s (Jason, Ismael and
myself) and 5 non-binding +1s (Manikumar, Ted, Jeff, Vahid, Satish). I am
closing this voting thread and move on to the final release now.


Guozhang


On Tue, Oct 31, 2017 at 8:48 PM, Satish Duggana 
wrote:

> +1 (non-binding)
> Verified signatures, ran tests on src dist.
>
> Thanks,
> Satish.
>
>
> On Wed, Nov 1, 2017 at 12:37 AM, Jeff Chao  wrote:
>
> > +1 (non-binding). We ran our usual performance and regression suite and
> > found no noticeable negative impacts.
> >
> > - Jeff
> > Heroku
> >
> > On Tue, Oct 31, 2017 at 8:54 AM, Ted Yu  wrote:
> >
> > > +1 (non-binding)
> > >
> > > Verified signatures.
> > > Ran test suite.
> > >
> > > On Tue, Oct 31, 2017 at 8:53 AM, Manikumar 
> > > wrote:
> > >
> > > > +1 (non-binding). Verified quickstart, ran producer/consumer perf
> > > scripts,
> > > > streams quickstart
> > > > ran tests on src distribution.
> > > >
> > > > On Tue, Oct 31, 2017 at 8:42 PM, Ismael Juma 
> > wrote:
> > > >
> > > > > +1 (binding) from me. Tested the quickstart with the source and
> > binary
> > > > > (Scala 2.12) artifacts, ran the tests on the source artifact and
> > > verified
> > > > > some signatures and hashes on source and binary (Scala 2.11)
> > artifacts.
> > > > >
> > > > > Thanks for running the release, Guozhang!
> > > > >
> > > > > Ismael
> > > > >
> > > > > On Fri, Oct 27, 2017 at 6:28 PM, Guozhang Wang  >
> > > > wrote:
> > > > >
> > > > > > Hello Kafka users, developers and client-developers,
> > > > > >
> > > > > > This is the fifth candidate for release of Apache Kafka 1.0.0.
> The
> > > main
> > > > > PRs
> > > > > > that gets merged in after RC3 are the following:
> > > > > >
> > > > > > *https://github.com/apache/kafka/commit/def1a768a6301c14ad6611358716ab03de04e76b*
> > > > > >
> > > > > > *https://github.com/apache/kafka/commit/b9fc0f2e6892062efa1fff0c6f7bfc683c8ba7ab*
> > > > > >
> > > > > > *https://github.com/apache/kafka/commit/a51fdcd2ee7efbd14857448a2fb7ecb71531e1f9*
> > > > > >
> > > > > > *https://github.com/apache/kafka/commit/109a60c77a56d4afed488c3ba35dc8459fde15ce*
> > > > > >
> > > > > > It's worth noting that starting in this version we are using a
> > > > different
> > > > > > version protocol with three digits: *major.minor.bug-fix*
> > > > > >
> > > > > > Any and all testing is welcome, but the following areas are worth
> > > > > > highlighting:
> > > > > >
> > > > > > 1. Client developers should verify that their clients can
> > > > produce/consume
> > > > > > to/from 1.0.0 brokers (ideally with compressed and uncompressed
> > > data).
> > > > > > 2. Performance and stress testing. Heroku and LinkedIn have
> helped
> > > with
> > > > > > this in the past (and issues have been found and fixed).
> > > > > > 3. End users can verify that their apps work correctly with the
> new
> > > > > > release.
> > > > > >
> > > > > > This is a major version release of Apache Kafka. It includes 29
> new
> > > > KIPs.
> > > > > > See the release notes and release plan
> > > > > > (*https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=71764913*)
> > > > > > for more details. A few feature highlights:
> > > > > >
> > > > > > * Java 9 support with significantly faster TLS and CRC32C
> > > > implementations
> > > > > > * JBOD improvements: disk failure only disables failed disk but
> not
> > > the
> > > > > > broker (KIP-112/KIP-113 part I)
> > > > > > * Controller improvements: reduced logging change to greatly
> > > accelerate
> > > > > > admin request handling.
> > > > > > * Newly added metrics across all the modules (KIP-164, KIP-168,
> > > > KIP-187,
> > > > > > KIP-188, KIP-196)
> > > > > > * Kafka Streams API improvements (KIP-120 / 130 / 138 / 150 /
> 160 /
> > > > 161),
> > > > > > and drop compatibility "Evolving" annotations
> > > > > >
> > > > > > Release notes for the 1.0.0 release:
> > > > > > *http://home.apache.org/~guozhang/kafka-1.0.0-rc4/RELEASE_NOTES.html*
> > > > > >
> > > > > >
> > > > > >
> > > > > > *** Please download, test and vote by Tuesday, October 31, 8pm PT
> > > > > >
> > > > > > Kafka's KEYS file containing PGP keys we use to sign the release:
> > > > > > http://kafka.apache.org/KEYS

[RESULTS] [VOTE] Release Kafka version 1.0.0

2017-10-31 Thread Guozhang Wang
The vote on RC4 passes with 8 +1 votes (3 bindings) and no 0 or -1 votes.

+1 votes
PMC Members:
* Jason Gustafson
* Ismael Juma
* Guozhang Wang

Community:
* Manikumar Reddy
* Ted Yu
* Jeff Chao
* Vahid Hashemian
* Satish Duggana

0 votes
* No votes

-1 votes
* No votes

Vote thread:
http://mail-archives.apache.org/mod_mbox/kafka-users/201710.mbox/%3CCAHwHRrXG66QSSA%3DUQUFRGQ0b-RF_q7zBTWc_Mp5%3Df%3D8Ac7oDrg%40mail.gmail.com%3E


I'll continue with the release process and the release announcement will
follow next.


-- Guozhang


Build failed in Jenkins: kafka-trunk-jdk9 #164

2017-10-31 Thread Apache Jenkins Server
See 


Changes:

[junrao] KAFKA-6072; User ZookeeperClient in GroupCoordinator and

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H25 (couchdbtest ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision f88fdbd3115cdb0f1bd26817513f3d33359512b1 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f f88fdbd3115cdb0f1bd26817513f3d33359512b1
Commit message: "KAFKA-6072; User ZookeeperClient in GroupCoordinator and 
TransactionCoordinator"
 > git rev-list 51787027159f6f206df928a5c8bd2a18bacd3d5c # timeout=10
Setting 
GRADLE_3_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_3.4-rc-2
[kafka-trunk-jdk9] $ /bin/bash -xe /tmp/jenkins2246648700180088599.sh
+ rm -rf 
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_3.4-rc-2/bin/gradle

FAILURE: Build failed with an exception.

* What went wrong:
Could not determine java version from '9.0.1'.

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_3_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_3.4-rc-2
ERROR: Step 'Publish JUnit test result report' failed: Test reports were found
but none of them are new. Did tests run? 
For example, 

 is 4 days 15 hr old

Setting 
GRADLE_3_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_3.4-rc-2
Not sending mail to unregistered user wangg...@gmail.com


wiki access for KIP

2017-10-31 Thread Steven Aerts
I hereby would like to request write access to the wiki to create a
KIP for KAFKA-6018.

My wiki id is steven.aerts.

Thanks,


   Steven