Jenkins build is back to normal : kafka-2.0-jdk8 #54

2018-06-22 Thread Apache Jenkins Server
See 




Re: [VOTE] 1.1.1 RC1

2018-06-22 Thread Dong Lin
Thank you for testing and voting on the release!

I noticed that the voting deadline stated in the 1.1.1-rc1 email is wrong. Please test and vote
by Tuesday, June 26, 12 pm PT.

Thanks,
Dong

On Fri, Jun 22, 2018 at 10:09 AM, Dong Lin  wrote:

> Hello Kafka users, developers and client-developers,
>
> This is the second candidate for release of Apache Kafka 1.1.1.
>
> Apache Kafka 1.1.1 is a bug-fix release for the 1.1 branch that was first
> released with 1.1.0 about 3 months ago. We have fixed about 25 issues since
> that release. A few of the more significant fixes include:
>
> KAFKA-6925  - Fix
> memory leak in StreamsMetricsThreadImpl
> KAFKA-6937  - In-sync
> replica delayed during fetch if replica throttle is exceeded
> KAFKA-6917  - Process
> txn completion asynchronously to avoid deadlock
> KAFKA-6893  - Create
> processors before starting acceptor to avoid ArithmeticException
> KAFKA-6870  -
> Fix ConcurrentModificationException in SampledStat
> KAFKA-6878  - Fix
> NullPointerException when querying global state store
> KAFKA-6879  - Invoke
> session init callbacks outside lock to avoid Controller deadlock
> KAFKA-6857  - Prevent
> follower from truncating to the wrong offset if undefined leader epoch is
> requested
> KAFKA-6854  - Log
> cleaner fails with transaction markers that are deleted during clean
> KAFKA-6747  - Check
> whether there is in-flight transaction before aborting transaction
> KAFKA-6748  - Double
> check before scheduling a new task after the punctuate call
> KAFKA-6739  -
> Fix IllegalArgumentException when down-converting from V2 to V0/V1
> KAFKA-6728  -
> Fix NullPointerException when instantiating the HeaderConverter
>
> Kafka 1.1.1 release plan:
> https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+1.1.1
>
> Release notes for the 1.1.1 release:
> http://home.apache.org/~lindong/kafka-1.1.1-rc1/RELEASE_NOTES.html
>
> *** Please download, test and vote by Thursday, Jun 22, 12pm PT ***
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> http://home.apache.org/~lindong/kafka-1.1.1-rc1/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/
>
> * Javadoc:
> http://home.apache.org/~lindong/kafka-1.1.1-rc1/javadoc/
>
> * Tag to be voted upon (off 1.1 branch) is the 1.1.1-rc1 tag:
> https://github.com/apache/kafka/tree/1.1.1-rc1
>
> * Documentation:
> http://kafka.apache.org/11/documentation.html
>
> * Protocol:
> http://kafka.apache.org/11/protocol.html
>
> * Successful Jenkins builds for the 1.1 branch:
> Unit/integration tests: https://builds.apache.org/job/kafka-1.1-jdk7/152/
> System tests: https://jenkins.confluent.io/job/system-test-kafka-branch-builder/1817
>
>
> Please test and verify the release artifacts and submit a vote for this RC,
> or report any issues so we can fix them and get a new RC out ASAP. Although
> this release vote requires PMC votes to pass, testing, votes, and bug
> reports are valuable and appreciated from everyone.
>
> Cheers,
> Dong
>
>
>


[VOTE] 0.10.2.2 RC1

2018-06-22 Thread Matthias J. Sax
Hello Kafka users, developers and client-developers,

This is the second candidate for release of Apache Kafka 0.10.2.2.

Note that RC0 was created before the upgrade to Gradle 4.8.1, so we
discarded it in favor of RC1 (without sending out an email for RC0).

This is a bug fix release closing 29 tickets:
https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+0.10.2.2

Release notes for the 0.10.2.2 release:
http://home.apache.org/~mjsax/kafka-0.10.2.2-rc1/RELEASE_NOTES.html

*** Please download, test and vote by Tuesday, 6/26/18 end-of-day, so we
can close the vote on Wednesday.

Kafka's KEYS file containing PGP keys we use to sign the release:
http://kafka.apache.org/KEYS

* Release artifacts to be voted upon (source and binary):
http://home.apache.org/~mjsax/kafka-0.10.2.2-rc1/

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/

* Javadoc:
http://home.apache.org/~mjsax/kafka-0.10.2.2-rc1/javadoc/

* Tag to be voted upon (off 0.10.2 branch) is the 0.10.2.2 tag:
https://github.com/apache/kafka/releases/tag/0.10.2.2-rc1

* Documentation:
http://kafka.apache.org/0102/documentation.html

* Protocol:
http://kafka.apache.org/0102/protocol.html

* Successful Jenkins builds for the 0.10.2 branch:
Unit/integration tests: https://builds.apache.org/job/kafka-0.10.2-jdk7/220/


Thanks,
  -Matthias





Build failed in Jenkins: kafka-1.1-jdk7 #154

2018-06-22 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] MINOR: bugfix streams total metrics (#5277)

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H30 (ubuntu xenial) in workspace 

Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/1.1^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/1.1^{commit} # timeout=10
Checking out Revision 22346636641e9a5708c5f8e8d31123a6c13dbf2f 
(refs/remotes/origin/1.1)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 22346636641e9a5708c5f8e8d31123a6c13dbf2f
Commit message: "MINOR: bugfix streams total metrics (#5277)"
 > git rev-list --no-walk ae3dd56682b6b58409563b477b3bf19aaeb4d353 # timeout=10
Setting GRADLE_3_5_HOME=/home/jenkins/tools/gradle/3.5
[kafka-1.1-jdk7] $ /bin/bash -xe /tmp/jenkins458646782881646.sh
+ rm -rf 
+ /home/jenkins/tools/gradle/3.5/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/3.5/userguide/gradle_daemon.html.
Daemon will be stopped at the end of the build stopping after processing
Download 
https://jcenter.bintray.com/com/github/jengelman/gradle/plugins/shadow/2.0.2/shadow-2.0.2.pom
Download 
https://jcenter.bintray.com/org/owasp/dependency-check-gradle/3.1.1/dependency-check-gradle-3.1.1.pom
Download 
https://jcenter.bintray.com/org/owasp/dependency-check-core/3.1.1/dependency-check-core-3.1.1.pom
Download 
https://jcenter.bintray.com/org/owasp/dependency-check-parent/3.1.1/dependency-check-parent-3.1.1.pom
Download 
https://jcenter.bintray.com/org/owasp/dependency-check-utils/3.1.1/dependency-check-utils-3.1.1.pom
Download 
https://jcenter.bintray.com/com/vdurmont/semver4j/2.1.0/semver4j-2.1.0.pom
Download 
https://jcenter.bintray.com/com/sun/mail/mailapi/1.6.0/mailapi-1.6.0.pom
Download 
https://jcenter.bintray.com/com/github/jengelman/gradle/plugins/shadow/2.0.2/shadow-2.0.2.jar
Download 
https://jcenter.bintray.com/org/owasp/dependency-check-gradle/3.1.1/dependency-check-gradle-3.1.1.jar
Download 
https://jcenter.bintray.com/org/owasp/dependency-check-core/3.1.1/dependency-check-core-3.1.1.jar
Download 
https://jcenter.bintray.com/org/owasp/dependency-check-utils/3.1.1/dependency-check-utils-3.1.1.jar
Download 
https://jcenter.bintray.com/com/vdurmont/semver4j/2.1.0/semver4j-2.1.0.jar
Download 
https://jcenter.bintray.com/com/sun/mail/mailapi/1.6.0/mailapi-1.6.0.jar
Building project 'core' with Scala version 2.11.12

FAILURE: Build failed with an exception.

* What went wrong:
A problem occurred configuring project ':core'.
> Could not resolve all dependencies for configuration ':core:scoverage'.
   > Could not resolve org.scoverage:scalac-scoverage-plugin_2.11:1.3.1.
 Required by:
 project :core
  > Could not resolve org.scoverage:scalac-scoverage-plugin_2.11:1.3.1.
 > Could not get resource 
'https://repo1.maven.org/maven2/org/scoverage/scalac-scoverage-plugin_2.11/1.3.1/scalac-scoverage-plugin_2.11-1.3.1.pom'.
> Could not HEAD 
'https://repo1.maven.org/maven2/org/scoverage/scalac-scoverage-plugin_2.11/1.3.1/scalac-scoverage-plugin_2.11-1.3.1.pom'.
   > Received fatal alert: protocol_version
   > Could not resolve org.scoverage:scalac-scoverage-runtime_2.11:1.3.1.
 Required by:
 project :core
  > Could not resolve org.scoverage:scalac-scoverage-runtime_2.11:1.3.1.
 > Could not get resource 
'https://repo1.maven.org/maven2/org/scoverage/scalac-scoverage-runtime_2.11/1.3.1/scalac-scoverage-runtime_2.11-1.3.1.pom'.
> Could not HEAD 
'https://repo1.maven.org/maven2/org/scoverage/scalac-scoverage-runtime_2.11/1.3.1/scalac-scoverage-runtime_2.11-1.3.1.pom'.
   > Received fatal alert: protocol_version

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.
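The repeated `Received fatal alert: protocol_version` errors above are the classic symptom of an older JVM (this job runs JDK 7) offering only TLS 1.0 to Maven Central, which requires TLS 1.2. A hedged mitigation sketch — the property name below is the standard JSSE system property, but whether Gradle's own HTTP client honors it can vary by Gradle version, and the robust fix is to run the build on JDK 8 or newer:

```properties
# gradle.properties — ask the JVM to offer TLSv1.2 when resolving dependencies.
# Assumes a JDK 7 update that implements TLSv1.2 but does not enable it by default.
systemProp.https.protocols=TLSv1.2
```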

BUILD FAILED

Total time: 50.554 secs
Build step 'Execute shell' marked build as failure

Re: [Discuss] KIP-321: Add method to get TopicNameExtractor in TopologyDescription

2018-06-22 Thread Guozhang Wang
Thanks for writing the KIP!

I'm +1 on the proposed changes overall. One minor suggestion: we should
also mention that `Sink#toString` will be updated so that if `topic()`
returns null, it falls back to the other call (the extractor), etc. This is
because, although we do not explicitly state the following logic as a public
protocol:

```
"Sink: " + name + " (topic: " + topic() + ")\n  <-- " + nodeNames(predecessors);
```

some users already rely on `topology.describe().toString()` to perform
runtime checks on the returned string values, so changing this format would
break their apps and force them to make code changes.
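As an illustration of the suggested fallback, here is a hedged sketch. The `Sink` class below is a simplified stand-in, not Kafka's actual `TopologyDescription.Sink`; predecessor printing is omitted and the field names are illustrative:

```java
// Simplified stand-in for TopologyDescription.Sink (predecessor printing is
// omitted for brevity; field names are illustrative, not Kafka's internals).
public class Sink {
    private final String name;
    private final String topic;              // null when a TopicNameExtractor routes dynamically
    private final String extractorClassName; // null when a static topic is configured

    public Sink(String name, String topic, String extractorClassName) {
        this.name = name;
        this.topic = topic;
        this.extractorClassName = extractorClassName;
    }

    public String topic() {
        return topic;
    }

    @Override
    public String toString() {
        // Keep the existing "topic: ..." format when a static topic is set, so
        // apps matching on describe().toString() are unaffected; only the new
        // dynamic-routing case prints differently.
        String target = topic() != null
                ? "topic: " + topic()
                : "extractor class: " + extractorClassName;
        return "Sink: " + name + " (" + target + ")";
    }

    public static void main(String[] args) {
        System.out.println(new Sink("KSTREAM-SINK-0", "out", null));
        System.out.println(new Sink("KSTREAM-SINK-0", null, "MyExtractor"));
    }
}
```

With a static topic the output format is unchanged (`Sink: KSTREAM-SINK-0 (topic: out)`); only sinks that use an extractor print the new form.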

Guozhang

On Wed, Jun 20, 2018 at 7:20 PM, Nishanth Pradeep 
wrote:

> Hello Everyone,
>
> I have created a new KIP to discuss extending TopologyDescription. You can
> find it here:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-321%3A+Add+method+to+get+TopicNameExtractor+in+TopologyDescription
>
> Please provide any feedback that you might have.
>
> Best,
> Nishanth Pradeep
>



-- 
-- Guozhang


Re: [DISCUSS] - KIP-314: KTable to GlobalKTable Bi-directional Join

2018-06-22 Thread Guozhang Wang
Hello Adam,

Please see my comments inline.

On Thu, Jun 21, 2018 at 8:14 AM, Adam Bellemare 
wrote:

> Hi Guozhang
>
> *Re: Questions*
> *1)* I do not yet have a solution to this, but I also did not look that
> closely at it when I begun this KIP. I admit that I was unaware of exactly
> how the GlobalKTable worked alongside the KTable/KStream topologies. You
> mention "It means the two topologies will be merged, and that merged
> topology can only be executed as a single task, by a single thread. " - is
> the problem here that the merged topology would be parallelized to other
> threads/instances? While I am becoming familiar with how the topologies are
> created under the hood, I am not yet fully clear on the implications of
> your statement. I will look into this further.
>
>
Yes. The issue is that today each task is executed by only a single thread
at any given time, and hence any state store is accessed by only a single
thread (except for interactive queries, and for global tables, where the
global update thread writes to the global store and the local threads read
from it). If we let the global store update thread also trigger joins and
send the results to the downstream operators, then the global store update
thread could access any state store in the subsequent part of the topology,
breaking our current threading model.


> *2)* " do you mean that although we have a duplicated state store:
> ModifiedEvents in addition to the original Events with only the enhanced
> key, this is not avoidable anyways even if we do re-keying?" Yes, that is
> correct, that is what I meant. I need to improve my knowledge around this
> component too. I have been browsing the KIP-213 discussion thread and
> looking at Jan's code
>
> *Re: Comments*
> *1) *Makes sense. I will update the diagram accordingly. Thanks!
>
> *2)* Wouldn't outer join require that we emit records from the right
> GlobalKTable that have no match in the left KTable? This seems undefined to
> me with the current proposal (above issues aside), since multiple threads
> would be producing the same output event for a single GlobalKTable update.
>
>
I was mainly considering the semantics of table-table joins, i.e. whether
we should add this operator to our API at all. Implementation-wise, we will
only have one global store update thread per instance, so there will not be
multiple threads producing the same output, but there are indeed other
issues we should consider, as mentioned above. Again, this comment is not
about the implementation, but about whether the API addition is desirable.


>
> Questions for you both:
> Q1) Is a KTable always materialized? I am looking at the code under the
> hood, and it seems to me that it's either materialized with an explicit
> Materialized object, or it's given an anonymous name and the default serdes
> are used. Am I correct in this observation?
>
>
A KTable is not always materialized. For example, a KTable generated from
`KTable#filter` or `KTable#mapValues` does not create a new materialized
state store; instead, we use the parent `KTable`'s state store for anyone
who wants to query it in joins.

Moving forward, we are also trying to optimize the topology to only
"logically" materialize a KTable when necessary; this is summarized in
https://issues.apache.org/jira/browse/KAFKA-6761
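The store-sharing idea can be sketched with a toy model. This is only an illustration of the concept, not Kafka's implementation; the class and field names are invented:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Predicate;

// Toy model (not Kafka's code) of why KTable#filter needs no new store: the
// filtered table answers queries from its parent's materialized store and
// applies the predicate on read.
public class LogicalTables {
    static class SimpleKTable {
        final Map<String, Long> store;   // materialized state, possibly shared
        final Predicate<Long> view;      // identity view for a source table

        SimpleKTable(Map<String, Long> store, Predicate<Long> view) {
            this.store = store;
            this.view = view;
        }

        static SimpleKTable source(Map<String, Long> data) {
            return new SimpleKTable(new HashMap<>(data), v -> true);
        }

        // filter() shares the parent's store instead of copying it
        SimpleKTable filter(Predicate<Long> p) {
            return new SimpleKTable(this.store, this.view.and(p));
        }

        Long get(String key) {
            Long v = store.get(key);
            return (v != null && view.test(v)) ? v : null;
        }
    }

    public static void main(String[] args) {
        SimpleKTable counts = SimpleKTable.source(Map.of("a", 1L, "b", 10L));
        SimpleKTable big = counts.filter(v -> v >= 5L);
        // Same backing store, different view:
        System.out.println(big.store == counts.store); // true
        System.out.println(big.get("b"));              // 10
        System.out.println(big.get("a"));              // null
    }
}
```

In this model a `filter` costs only a predicate applied on read; a new physical store is needed only when an operation actually changes the keyed values, such as an aggregation.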


>
> Thanks,
> Adam
>
>

-- 
-- Guozhang


Re: [DISCUSS] KIP-319: Replace segments with segmentSize in WindowBytesStoreSupplier

2018-06-22 Thread Guozhang Wang
Thanks John.

On Fri, Jun 22, 2018 at 5:05 PM, John Roesler  wrote:

> Thanks for the feedback, Bill and Guozhang,
>
> I've updated the KIP accordingly.
>
> Thanks,
> -John
>
> On Fri, Jun 22, 2018 at 5:51 PM Guozhang Wang  wrote:
>
> > Thanks for the KIP. I'm +1 on the proposal. One minor comment on the
> wiki:
> > the `In Windows, we will:` section code snippet is empty.
> >
> > On Fri, Jun 22, 2018 at 3:29 PM, Bill Bejeck  wrote:
> >
> > > Hi John,
> > >
> > > Thanks for the KIP, and overall it's a +1 for me.
> > >
> > > In the JavaDoc for the segmentInterval method, there's no mention of
> the
> > > number of segments there can be at any one time.  While it's implied
> that
> > > the number of segments is potentially unbounded, would be better to
> > > explicitly state that the previous limit on the number of segments is
> > going
> > > to be removed as well?
> > >
> > > I have a couple of nit comments.   The method name is still segmentSize
> > in
> > > the code block vs segmentInterval and the order of the parameters for
> the
> > > third persistentWindowStore don't match the order in the JavaDoc.
> > >
> > > Thanks,
> > > Bill
> > >
> > >
> > >
> > > On Thu, Jun 21, 2018 at 3:32 PM John Roesler 
> wrote:
> > >
> > > > I've updated the KIP and draft PR accordingly.
> > > >
> > > > On Thu, Jun 21, 2018 at 2:03 PM John Roesler 
> > wrote:
> > > >
> > > > > Interesting... I did not initially consider it because I didn't
> want
> > to
> > > > > have an impact on anyone's Streams apps, but now I see that unless
> > > > > developers have subclassed `Windows`, the number of segments would
> > > always
> > > > > be 3!
> > > > >
> > > > > There's one caveat to this, which I think was a mistake. The field
> > > > > `segments` in Windows is public, which means that anyone can
> actually
> > > set
> > > > > it directly on any Window instance like:
> > > > >
> > > > > TimeWindows tw = TimeWindows.of(100);
> > > > > tw.segments = 12345;
> > > > >
> > > > > Bypassing the bounds check and contradicting the javadoc in Windows
> > > that
> > > > > says users can't directly set it. Sadly there's no way to
> "deprecate"
> > > > this
> > > > > exposure, so I propose just to make it private.
> > > > >
> > > > > With this new knowledge, I agree, I think we can switch to
> > > > > "segmentInterval" throughout the interface.
> > > > >
> > > > > On Wed, Jun 20, 2018 at 5:06 PM Guozhang Wang 
> > > > wrote:
> > > > >
> > > > >> Hello John,
> > > > >>
> > > > >> Thanks for the KIP.
> > > > >>
> > > > >> Should we consider making the change on
> > `Stores#persistentWindowStore`
> > > > >> parameters as well?
> > > > >>
> > > > >>
> > > > >> Guozhang
> > > > >>
> > > > >>
> > > > >> On Wed, Jun 20, 2018 at 1:31 PM, John Roesler 
> > > > wrote:
> > > > >>
> > > > >> > Hi Ted,
> > > > >> >
> > > > >> > Ah, when you made that comment to me before, I thought you meant
> > as
> > > > >> opposed
> > > > >> > to "segments". Now it makes sense that you meant as opposed to
> > > > >> > "segmentSize".
> > > > >> >
> > > > >> > I named it that way to match the peer method "windowSize", which
> > is
> > > > >> also a
> > > > >> > quantity of milliseconds.
> > > > >> >
> > > > >> > I agree that "interval" is more intuitive, but I think I favor
> > > > >> consistency
> > > > >> > in this case. Does that seem reasonable?
> > > > >> >
> > > > >> > Thanks,
> > > > >> > -John
> > > > >> >
> > > > >> > On Wed, Jun 20, 2018 at 1:06 PM Ted Yu 
> > wrote:
> > > > >> >
> > > > >> > > Normally size is not measured in time unit, such as
> > milliseconds.
> > > > >> > > How about naming the new method segmentInterval ?
> > > > >> > > Thanks
> > > > >> > >  Original message From: John Roesler <
> > > > >> j...@confluent.io>
> > > > >> > > Date: 6/20/18  10:45 AM  (GMT-08:00) To: dev@kafka.apache.org
> > > > >> Subject:
> > > > >> > > [DISCUSS] KIP-319: Replace segments with segmentSize in
> > > > >> > > WindowBytesStoreSupplier
> > > > >> > > Hello All,
> > > > >> > >
> > > > >> > > I'd like to propose KIP-319 to fix an issue I identified in
> > > > >> KAFKA-7080.
> > > > >> > > Specifically, we're creating CachingWindowStore with the
> *number
> > > of
> > > > >> > > segments* instead of the *segment size*.
> > > > >> > >
> > > > >> > > Here's the jira:
> > https://issues.apache.org/jira/browse/KAFKA-7080
> > > > >> > > Here's the KIP: https://cwiki.apache.org/confluence/x/mQU0BQ
> > > > >> > >
> > > > >> > > additionally, here's a draft PR for clarity:
> > > > >> > > https://github.com/apache/kafka/pull/5257
> > > > >> > >
> > > > >> > > Please let me know what you think!
> > > > >> > >
> > > > >> > > Thanks,
> > > > >> > > -John
> > > > >> > >
> > > > >> >
> > > > >>
> > > > >>
> > > > >>
> > > > >> --
> > > > >> -- Guozhang
> > > > >>
> > > > >
> > > >
> > >
> >
> >
> >
> > --
> > -- Guozhang
> >
>



-- 
-- Guozhang


Re: [DISCUSS] KIP-319: Replace segments with segmentSize in WindowBytesStoreSupplier

2018-06-22 Thread John Roesler
Thanks for the feedback, Bill and Guozhang,

I've updated the KIP accordingly.

Thanks,
-John

On Fri, Jun 22, 2018 at 5:51 PM Guozhang Wang  wrote:

> Thanks for the KIP. I'm +1 on the proposal. One minor comment on the wiki:
> the `In Windows, we will:` section code snippet is empty.
>
> On Fri, Jun 22, 2018 at 3:29 PM, Bill Bejeck  wrote:
>
> > Hi John,
> >
> > Thanks for the KIP, and overall it's a +1 for me.
> >
> > In the JavaDoc for the segmentInterval method, there's no mention of the
> > number of segments there can be at any one time.  While it's implied that
> > the number of segments is potentially unbounded, would be better to
> > explicitly state that the previous limit on the number of segments is
> going
> > to be removed as well?
> >
> > I have a couple of nit comments.   The method name is still segmentSize
> in
> > the code block vs segmentInterval and the order of the parameters for the
> > third persistentWindowStore don't match the order in the JavaDoc.
> >
> > Thanks,
> > Bill
> >
> >
> >
> > On Thu, Jun 21, 2018 at 3:32 PM John Roesler  wrote:
> >
> > > I've updated the KIP and draft PR accordingly.
> > >
> > > On Thu, Jun 21, 2018 at 2:03 PM John Roesler 
> wrote:
> > >
> > > > Interesting... I did not initially consider it because I didn't want
> to
> > > > have an impact on anyone's Streams apps, but now I see that unless
> > > > developers have subclassed `Windows`, the number of segments would
> > always
> > > > be 3!
> > > >
> > > > There's one caveat to this, which I think was a mistake. The field
> > > > `segments` in Windows is public, which means that anyone can actually
> > set
> > > > it directly on any Window instance like:
> > > >
> > > > TimeWindows tw = TimeWindows.of(100);
> > > > tw.segments = 12345;
> > > >
> > > > Bypassing the bounds check and contradicting the javadoc in Windows
> > that
> > > > says users can't directly set it. Sadly there's no way to "deprecate"
> > > this
> > > > exposure, so I propose just to make it private.
> > > >
> > > > With this new knowledge, I agree, I think we can switch to
> > > > "segmentInterval" throughout the interface.
> > > >
> > > > On Wed, Jun 20, 2018 at 5:06 PM Guozhang Wang 
> > > wrote:
> > > >
> > > >> Hello John,
> > > >>
> > > >> Thanks for the KIP.
> > > >>
> > > >> Should we consider making the change on
> `Stores#persistentWindowStore`
> > > >> parameters as well?
> > > >>
> > > >>
> > > >> Guozhang
> > > >>
> > > >>
> > > >> On Wed, Jun 20, 2018 at 1:31 PM, John Roesler 
> > > wrote:
> > > >>
> > > >> > Hi Ted,
> > > >> >
> > > >> > Ah, when you made that comment to me before, I thought you meant
> as
> > > >> opposed
> > > >> > to "segments". Now it makes sense that you meant as opposed to
> > > >> > "segmentSize".
> > > >> >
> > > >> > I named it that way to match the peer method "windowSize", which
> is
> > > >> also a
> > > >> > quantity of milliseconds.
> > > >> >
> > > >> > I agree that "interval" is more intuitive, but I think I favor
> > > >> consistency
> > > >> > in this case. Does that seem reasonable?
> > > >> >
> > > >> > Thanks,
> > > >> > -John
> > > >> >
> > > >> > On Wed, Jun 20, 2018 at 1:06 PM Ted Yu 
> wrote:
> > > >> >
> > > >> > > Normally size is not measured in time unit, such as
> milliseconds.
> > > >> > > How about naming the new method segmentInterval ?
> > > >> > > Thanks
> > > >> > >  Original message From: John Roesler <
> > > >> j...@confluent.io>
> > > >> > > Date: 6/20/18  10:45 AM  (GMT-08:00) To: dev@kafka.apache.org
> > > >> Subject:
> > > >> > > [DISCUSS] KIP-319: Replace segments with segmentSize in
> > > >> > > WindowBytesStoreSupplier
> > > >> > > Hello All,
> > > >> > >
> > > >> > > I'd like to propose KIP-319 to fix an issue I identified in
> > > >> KAFKA-7080.
> > > >> > > Specifically, we're creating CachingWindowStore with the *number
> > of
> > > >> > > segments* instead of the *segment size*.
> > > >> > >
> > > >> > > Here's the jira:
> https://issues.apache.org/jira/browse/KAFKA-7080
> > > >> > > Here's the KIP: https://cwiki.apache.org/confluence/x/mQU0BQ
> > > >> > >
> > > >> > > additionally, here's a draft PR for clarity:
> > > >> > > https://github.com/apache/kafka/pull/5257
> > > >> > >
> > > >> > > Please let me know what you think!
> > > >> > >
> > > >> > > Thanks,
> > > >> > > -John
> > > >> > >
> > > >> >
> > > >>
> > > >>
> > > >>
> > > >> --
> > > >> -- Guozhang
> > > >>
> > > >
> > >
> >
>
>
>
> --
> -- Guozhang
>


Re: [VOTE] 0.11.0.3 RC0

2018-06-22 Thread Vahid S Hashemian
+1 (non-binding)

Built from source and ran quickstart successfully on Ubuntu (with Java 8).

Thanks Matthias!
--Vahid




From:   "Matthias J. Sax" 
To: dev@kafka.apache.org, us...@kafka.apache.org, 
kafka-clie...@googlegroups.com
Date:   06/22/2018 03:14 PM
Subject:[VOTE] 0.11.0.3 RC0



Hello Kafka users, developers and client-developers,

This is the first candidate for release of Apache Kafka 0.11.0.3.

This is a bug fix release closing 27 tickets:
https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+0.11.0.3

Release notes for the 0.11.0.3 release:
http://home.apache.org/~mjsax/kafka-0.11.0.3-rc0/RELEASE_NOTES.html

*** Please download, test and vote by Tuesday, 6/26/18 end-of-day, so we
can close the vote on Wednesday.

Kafka's KEYS file containing PGP keys we use to sign the release:
http://kafka.apache.org/KEYS

* Release artifacts to be voted upon (source and binary):
http://home.apache.org/~mjsax/kafka-0.11.0.3-rc0/

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/

* Javadoc:
http://home.apache.org/~mjsax/kafka-0.11.0.3-rc0/javadoc/

* Tag to be voted upon (off 0.11.0 branch) is the 0.11.0.3 tag:
https://github.com/apache/kafka/releases/tag/0.11.0.3-rc0

* Documentation:
http://kafka.apache.org/0110/documentation.html

* Protocol:
http://kafka.apache.org/0110/protocol.html

* Successful Jenkins builds for the 0.11.0 branch:
Unit/integration tests: 
https://builds.apache.org/job/kafka-0.11.0-jdk7/385/
System tests:
https://jenkins.confluent.io/job/system-test-kafka/job/0.11.0/217/


Thanks,
  -Matthias






Re: [VOTE] 2.0.0 RC0

2018-06-22 Thread Vahid S Hashemian
+1 (non-binding)

Built from source and ran quickstart successfully on Ubuntu (with Java 8 
and Java 9).

Thanks Rajini!
--Vahid



Re: [DISCUSS] KIP-319: Replace segments with segmentSize in WindowBytesStoreSupplier

2018-06-22 Thread Guozhang Wang
Thanks for the KIP. I'm +1 on the proposal. One minor comment on the wiki:
the `In Windows, we will:` section code snippet is empty.

On Fri, Jun 22, 2018 at 3:29 PM, Bill Bejeck  wrote:

> Hi John,
>
> Thanks for the KIP, and overall it's a +1 for me.
>
> In the JavaDoc for the segmentInterval method, there's no mention of the
> number of segments there can be at any one time.  While it's implied that
> the number of segments is potentially unbounded, would be better to
> explicitly state that the previous limit on the number of segments is going
> to be removed as well?
>
> I have a couple of nit comments.   The method name is still segmentSize in
> the code block vs segmentInterval and the order of the parameters for the
> third persistentWindowStore don't match the order in the JavaDoc.
>
> Thanks,
> Bill
>
>
>
> On Thu, Jun 21, 2018 at 3:32 PM John Roesler  wrote:
>
> > I've updated the KIP and draft PR accordingly.
> >
> > On Thu, Jun 21, 2018 at 2:03 PM John Roesler  wrote:
> >
> > > Interesting... I did not initially consider it because I didn't want to
> > > have an impact on anyone's Streams apps, but now I see that unless
> > > developers have subclassed `Windows`, the number of segments would
> always
> > > be 3!
> > >
> > > There's one caveat to this, which I think was a mistake. The field
> > > `segments` in Windows is public, which means that anyone can actually
> set
> > > it directly on any Window instance like:
> > >
> > > TimeWindows tw = TimeWindows.of(100);
> > > tw.segments = 12345;
> > >
> > > Bypassing the bounds check and contradicting the javadoc in Windows
> that
> > > says users can't directly set it. Sadly there's no way to "deprecate"
> > this
> > > exposure, so I propose just to make it private.
> > >
> > > With this new knowledge, I agree, I think we can switch to
> > > "segmentInterval" throughout the interface.
> > >
> > > On Wed, Jun 20, 2018 at 5:06 PM Guozhang Wang 
> > wrote:
> > >
> > >> Hello John,
> > >>
> > >> Thanks for the KIP.
> > >>
> > >> Should we consider making the change on `Stores#persistentWindowStore`
> > >> parameters as well?
> > >>
> > >>
> > >> Guozhang
> > >>
> > >>
> > >> On Wed, Jun 20, 2018 at 1:31 PM, John Roesler 
> > wrote:
> > >>
> > >> > Hi Ted,
> > >> >
> > >> > Ah, when you made that comment to me before, I thought you meant as
> > >> opposed
> > >> > to "segments". Now it makes sense that you meant as opposed to
> > >> > "segmentSize".
> > >> >
> > >> > I named it that way to match the peer method "windowSize", which is
> > >> also a
> > >> > quantity of milliseconds.
> > >> >
> > >> > I agree that "interval" is more intuitive, but I think I favor
> > >> consistency
> > >> > in this case. Does that seem reasonable?
> > >> >
> > >> > Thanks,
> > >> > -John
> > >> >
> > >> > On Wed, Jun 20, 2018 at 1:06 PM Ted Yu  wrote:
> > >> >
> > >> > > Normally size is not measured in time unit, such as milliseconds.
> > >> > > How about naming the new method segmentInterval ?
> > >> > > Thanks
> > >> > >  Original message From: John Roesler <
> > >> j...@confluent.io>
> > >> > > Date: 6/20/18  10:45 AM  (GMT-08:00) To: dev@kafka.apache.org
> > >> Subject:
> > >> > > [DISCUSS] KIP-319: Replace segments with segmentSize in
> > >> > > WindowBytesStoreSupplier
> > >> > > Hello All,
> > >> > >
> > >> > > I'd like to propose KIP-319 to fix an issue I identified in
> > >> KAFKA-7080.
> > >> > > Specifically, we're creating CachingWindowStore with the *number
> of
> > >> > > segments* instead of the *segment size*.
> > >> > >
> > >> > > Here's the jira: https://issues.apache.org/jira/browse/KAFKA-7080
> > >> > > Here's the KIP: https://cwiki.apache.org/confluence/x/mQU0BQ
> > >> > >
> > >> > > additionally, here's a draft PR for clarity:
> > >> > > https://github.com/apache/kafka/pull/5257
> > >> > >
> > >> > > Please let me know what you think!
> > >> > >
> > >> > > Thanks,
> > >> > > -John
> > >> > >
> > >> >
> > >>
> > >>
> > >>
> > >> --
> > >> -- Guozhang
> > >>
> > >
> >
>



-- 
-- Guozhang


Re: [DISCUSS] KIP-319: Replace segments with segmentSize in WindowBytesStoreSupplier

2018-06-22 Thread Bill Bejeck
Hi John,

Thanks for the KIP, and overall it's a +1 for me.

In the JavaDoc for the segmentInterval method, there's no mention of the
number of segments there can be at any one time. While it's implied that
the number of segments is potentially unbounded, would it be better to
explicitly state that the previous limit on the number of segments is going
to be removed as well?

I have a couple of nit comments. The method name is still segmentSize in
the code block (vs. segmentInterval), and the order of the parameters for
the third persistentWindowStore doesn't match the order in the JavaDoc.

Thanks,
Bill



On Thu, Jun 21, 2018 at 3:32 PM John Roesler  wrote:

> I've updated the KIP and draft PR accordingly.
>
> On Thu, Jun 21, 2018 at 2:03 PM John Roesler  wrote:
>
> > Interesting... I did not initially consider it because I didn't want to
> > have an impact on anyone's Streams apps, but now I see that unless
> > developers have subclassed `Windows`, the number of segments would always
> > be 3!
> >
> > There's one caveat to this, which I think was a mistake. The field
> > `segments` in Windows is public, which means that anyone can actually set
> > it directly on any Window instance like:
> >
> > TimeWindows tw = TimeWindows.of(100);
> > tw.segments = 12345;
> >
> > Bypassing the bounds check and contradicting the javadoc in Windows that
> > says users can't directly set it. Sadly there's no way to "deprecate"
> this
> > exposure, so I propose just to make it private.
> >
> > With this new knowledge, I agree, I think we can switch to
> > "segmentInterval" throughout the interface.
> >
> > On Wed, Jun 20, 2018 at 5:06 PM Guozhang Wang 
> wrote:
> >
> >> Hello John,
> >>
> >> Thanks for the KIP.
> >>
> >> Should we consider making the change on `Stores#persistentWindowStore`
> >> parameters as well?
> >>
> >>
> >> Guozhang
> >>
> >>
> >> On Wed, Jun 20, 2018 at 1:31 PM, John Roesler 
> wrote:
> >>
> >> > Hi Ted,
> >> >
> >> > Ah, when you made that comment to me before, I thought you meant as
> >> opposed
> >> > to "segments". Now it makes sense that you meant as opposed to
> >> > "segmentSize".
> >> >
> >> > I named it that way to match the peer method "windowSize", which is
> >> also a
> >> > quantity of milliseconds.
> >> >
> >> > I agree that "interval" is more intuitive, but I think I favor
> >> consistency
> >> > in this case. Does that seem reasonable?
> >> >
> >> > Thanks,
> >> > -John
> >> >
> >> > On Wed, Jun 20, 2018 at 1:06 PM Ted Yu  wrote:
> >> >
> >> > > Normally, size is not measured in time units such as milliseconds.
> >> > > How about naming the new method segmentInterval?
> >> > > Thanks
> >> > >  Original message From: John Roesler <
> >> j...@confluent.io>
> >> > > Date: 6/20/18  10:45 AM  (GMT-08:00) To: dev@kafka.apache.org
> >> Subject:
> >> > > [DISCUSS] KIP-319: Replace segments with segmentSize in
> >> > > WindowBytesStoreSupplier
> >> > > Hello All,
> >> > >
> >> > > I'd like to propose KIP-319 to fix an issue I identified in
> >> KAFKA-7080.
> >> > > Specifically, we're creating CachingWindowStore with the *number of
> >> > > segments* instead of the *segment size*.
> >> > >
> >> > > Here's the jira: https://issues.apache.org/jira/browse/KAFKA-7080
> >> > > Here's the KIP: https://cwiki.apache.org/confluence/x/mQU0BQ
> >> > >
> >> > > additionally, here's a draft PR for clarity:
> >> > > https://github.com/apache/kafka/pull/5257
> >> > >
> >> > > Please let me know what you think!
> >> > >
> >> > > Thanks,
> >> > > -John
> >> > >
> >> >
> >>
> >>
> >>
> >> --
> >> -- Guozhang
> >>
> >
>
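The conversion at the heart of the KIP-319 discussion above, turning a legacy number of segments into a segment interval in milliseconds, can be sketched as follows. The class name and the formula are illustrative assumptions for this thread, not Kafka's exact internal code:

```java
public class SegmentIntervalSketch {
    // Hypothetical helper: derive a segment interval (ms) from a retention
    // period and a legacy segment count. This mirrors the bug described in
    // the thread, where callers passed a *number of segments* in a place
    // that expected a *segment size* in milliseconds. The formula is an
    // assumption for illustration only.
    static long segmentInterval(long retentionPeriodMs, int numSegments) {
        if (numSegments < 2) {
            throw new IllegalArgumentException("numSegments must be at least 2");
        }
        return retentionPeriodMs / (numSegments - 1);
    }

    public static void main(String[] args) {
        // A 60s retention split across the legacy default of 3 segments
        // yields a 30s segment interval.
        System.out.println(segmentInterval(60_000L, 3)); // prints 30000
    }
}
```

The point of the rename in the KIP is that a quantity like 30000 (milliseconds) and a quantity like 3 (a count) are easy to confuse when both are plain numbers; naming the parameter segmentInterval makes the unit explicit.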


[VOTE] 0.11.0.3 RC0

2018-06-22 Thread Matthias J. Sax
Hello Kafka users, developers and client-developers,

This is the first candidate for release of Apache Kafka 0.11.0.3.

This is a bug fix release closing 27 tickets:
https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+0.11.0.3

Release notes for the 0.11.0.3 release:
http://home.apache.org/~mjsax/kafka-0.11.0.3-rc0/RELEASE_NOTES.html

*** Please download, test and vote by Tuesday, 6/26/18 end-of-day, so we
can close the vote on Wednesday.

Kafka's KEYS file containing PGP keys we use to sign the release:
http://kafka.apache.org/KEYS

* Release artifacts to be voted upon (source and binary):
http://home.apache.org/~mjsax/kafka-0.11.0.3-rc0/

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/

* Javadoc:
http://home.apache.org/~mjsax/kafka-0.11.0.3-rc0/javadoc/

* Tag to be voted upon (off 0.11.0 branch) is the 0.11.0.3 tag:
https://github.com/apache/kafka/releases/tag/0.11.0.3-rc0

* Documentation:
http://kafka.apache.org/0110/documentation.html

* Protocol:
http://kafka.apache.org/0110/protocol.html

* Successful Jenkins builds for the 0.11.0 branch:
Unit/integration tests: https://builds.apache.org/job/kafka-0.11.0-jdk7/385/
System tests:
https://jenkins.confluent.io/job/system-test-kafka/job/0.11.0/217/

/**

Thanks,
  -Matthias



signature.asc
Description: OpenPGP digital signature


Re: [VOTE] 1.0.2 RC0

2018-06-22 Thread Ted Yu
+1

Ran test suite.

Checked signatures.

On Fri, Jun 22, 2018 at 11:42 AM, Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:

> +1 (non-binding)
>
> Built from source and ran quickstart successfully on Ubuntu (with Java 8).
>
> Thanks for running the release Matthias!
> --Vahid
>
>
>
>
> From:   "Matthias J. Sax" 
> To: dev@kafka.apache.org, us...@kafka.apache.org,
> kafka-clie...@googlegroups.com
> Date:   06/22/2018 10:42 AM
> Subject:[VOTE] 1.0.2 RC0
>
>
>
> Hello Kafka users, developers and client-developers,
>
> This is the first candidate for release of Apache Kafka 1.0.2.
>
> This is a bug fix release closing 26 tickets:
> https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+1.0.2
>
> Release notes for the 1.0.2 release:
> http://home.apache.org/~mjsax/kafka-1.0.2-rc0/RELEASE_NOTES.html
>
> *** Please download, test and vote by Tuesday, 6/26/18 end-of-day, so we
> can close the vote on Wednesday.
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> http://home.apache.org/~mjsax/kafka-1.0.2-rc0/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/
>
> * Javadoc:
> http://home.apache.org/~mjsax/kafka-1.0.2-rc0/javadoc/
>
> * Tag to be voted upon (off 1.0 branch) is the 1.0.2 tag:
> https://github.com/apache/kafka/releases/tag/1.0.2-rc0
>
> * Documentation:
> http://kafka.apache.org/10/documentation.html
>
> * Protocol:
> http://kafka.apache.org/10/protocol.html
>
> * Successful Jenkins builds for the 1.0 branch:
> Unit/integration tests: https://builds.apache.org/job/kafka-1.0-jdk7/211/
> System tests:
> https://jenkins.confluent.io/job/system-test-kafka/job/1.0/217/
>
> /**
>
> Thanks,
>   -Matthias
>
> [attachment "signature.asc" deleted by Vahid S Hashemian/Silicon
> Valley/IBM]
>
>
>
>


Re: [VOTE] 1.1.1 RC1

2018-06-22 Thread Vahid S Hashemian
+1 (non-binding)

Built from source and ran quickstart successfully on Ubuntu (with Java 8).

Thanks Dong!
--Vahid



From:   Dong Lin 
To: dev@kafka.apache.org, us...@kafka.apache.org, 
kafka-clie...@googlegroups.com
Date:   06/22/2018 10:10 AM
Subject:[VOTE] 1.1.1 RC1



Hello Kafka users, developers and client-developers,

This is the second candidate for release of Apache Kafka 1.1.1.

Apache Kafka 1.1.1 is a bug-fix release for the 1.1 branch that was first
released with 1.1.0 about 3 months ago. We have fixed about 25 issues since
that release. A few of the more significant fixes include:

KAFKA-6925 <https://issues.apache.org/jira/browse/KAFKA-6925> - Fix memory
leak in StreamsMetricsThreadImpl
KAFKA-6937 <https://issues.apache.org/jira/browse/KAFKA-6937> - In-sync
replica delayed during fetch if replica throttle is exceeded
KAFKA-6917 <https://issues.apache.org/jira/browse/KAFKA-6917> - Process txn
completion asynchronously to avoid deadlock
KAFKA-6893 <https://issues.apache.org/jira/browse/KAFKA-6893> - Create
processors before starting acceptor to avoid ArithmeticException
KAFKA-6870 <https://issues.apache.org/jira/browse/KAFKA-6870> -
Fix ConcurrentModificationException in SampledStat
KAFKA-6878 <https://issues.apache.org/jira/browse/KAFKA-6878> - Fix
NullPointerException when querying global state store
KAFKA-6879 <https://issues.apache.org/jira/browse/KAFKA-6879> - Invoke
session init callbacks outside lock to avoid Controller deadlock
KAFKA-6857 <https://issues.apache.org/jira/browse/KAFKA-6857> - Prevent
follower from truncating to the wrong offset if undefined leader epoch is
requested
KAFKA-6854 <https://issues.apache.org/jira/browse/KAFKA-6854> - Log cleaner
fails with transaction markers that are deleted during clean
KAFKA-6747 <https://issues.apache.org/jira/browse/KAFKA-6747> - Check
whether there is in-flight transaction before aborting transaction
KAFKA-6748 <https://issues.apache.org/jira/browse/KAFKA-6748> - Double
check before scheduling a new task after the punctuate call
KAFKA-6739 <https://issues.apache.org/jira/browse/KAFKA-6739> -
Fix IllegalArgumentException when down-converting from V2 to V0/V1
KAFKA-6728 <https://issues.apache.org/jira/browse/KAFKA-6728> -
Fix NullPointerException when instantiating the HeaderConverter

Kafka 1.1.1 release plan:
https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+1.1.1


Release notes for the 1.1.1 release:
http://home.apache.org/~lindong/kafka-1.1.1-rc1/RELEASE_NOTES.html


*** Please download, test and vote by Thursday, Jun 22, 12pm PT ***

Kafka's KEYS file containing PGP keys we use to sign the release:
http://kafka.apache.org/KEYS


* Release artifacts to be voted upon (source and binary):
http://home.apache.org/~lindong/kafka-1.1.1-rc1/


* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/


* Javadoc:
http://home.apache.org/~lindong/kafka-1.1.1-rc1/javadoc/


* Tag to be voted upon (off 1.1 branch) is the 1.1.1-rc1 tag:
https://github.com/apache/kafka/tree/1.1.1-rc1


* Documentation:
http://kafka.apache.org/11/documentation.html


* Protocol:
http://kafka.apache.org/11/protocol.html


* Successful Jenkins builds for the 1.1 branch:
Unit/integration tests: https://builds.apache.org/job/kafka-1.1-jdk7/152/
System tests:
https://jenkins.confluent.io/job/system-test-kafka-branch-builder/1817


Please test and verify the release artifacts and submit a vote for this RC,
or report any issues so we can fix them and get a new RC out ASAP. Although
this release vote requires PMC votes to pass, testing, votes, and bug
reports are valuable and appreciated from everyone.

Cheers,
Dong






Build failed in Jenkins: kafka-2.0-jdk8 #53

2018-06-22 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] MINOR: Cleanup threads in integration tests (#5269)

--
[...truncated 2.61 MB...]

org.apache.kafka.connect.runtime.WorkerConfigTransformerTest > 
testReplaceVariable PASSED

org.apache.kafka.connect.runtime.WorkerConfigTransformerTest > 
testReplaceVariableWithTTL STARTED

org.apache.kafka.connect.runtime.WorkerConfigTransformerTest > 
testReplaceVariableWithTTL PASSED

org.apache.kafka.connect.runtime.WorkerConfigTransformerTest > 
testReplaceVariableWithTTLFirstCancelThenScheduleRestart STARTED

org.apache.kafka.connect.runtime.WorkerConfigTransformerTest > 
testReplaceVariableWithTTLFirstCancelThenScheduleRestart PASSED

org.apache.kafka.connect.runtime.WorkerConfigTransformerTest > 
testTransformNullConfiguration STARTED

org.apache.kafka.connect.runtime.WorkerConfigTransformerTest > 
testTransformNullConfiguration PASSED

org.apache.kafka.connect.runtime.WorkerConfigTransformerTest > 
testReplaceVariableWithTTLAndScheduleRestart STARTED

org.apache.kafka.connect.runtime.WorkerConfigTransformerTest > 
testReplaceVariableWithTTLAndScheduleRestart PASSED

org.apache.kafka.connect.runtime.WorkerTaskTest > standardStartup STARTED

org.apache.kafka.connect.runtime.WorkerTaskTest > standardStartup PASSED

org.apache.kafka.connect.runtime.WorkerTaskTest > stopBeforeStarting STARTED

org.apache.kafka.connect.runtime.WorkerTaskTest > stopBeforeStarting PASSED

org.apache.kafka.connect.runtime.WorkerTaskTest > cancelBeforeStopping STARTED

org.apache.kafka.connect.runtime.WorkerTaskTest > cancelBeforeStopping PASSED

org.apache.kafka.connect.runtime.WorkerTaskTest > 
updateMetricsOnListenerEventsForStartupPauseResumeAndShutdown STARTED

org.apache.kafka.connect.runtime.WorkerTaskTest > 
updateMetricsOnListenerEventsForStartupPauseResumeAndShutdown PASSED

org.apache.kafka.connect.runtime.WorkerTaskTest > 
updateMetricsOnListenerEventsForStartupPauseResumeAndFailure STARTED

org.apache.kafka.connect.runtime.WorkerTaskTest > 
updateMetricsOnListenerEventsForStartupPauseResumeAndFailure PASSED

org.apache.kafka.connect.runtime.ErrorHandlingTaskTest > 
testErrorHandlingInSinkTasks STARTED

org.apache.kafka.connect.runtime.ErrorHandlingTaskTest > 
testErrorHandlingInSinkTasks PASSED

org.apache.kafka.connect.runtime.ErrorHandlingTaskTest > 
testErrorHandlingInSourceTasks STARTED

org.apache.kafka.connect.runtime.ErrorHandlingTaskTest > 
testErrorHandlingInSourceTasks PASSED

org.apache.kafka.connect.runtime.ErrorHandlingTaskTest > 
testErrorHandlingInSourceTasksWthBadConverter STARTED

org.apache.kafka.connect.runtime.ErrorHandlingTaskTest > 
testErrorHandlingInSourceTasksWthBadConverter PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnectorFailedBasicValidation STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnectorFailedBasicValidation PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnectorFailedCustomValidation STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnectorFailedCustomValidation PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnectorAlreadyExists STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnectorAlreadyExists PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testPutConnectorConfig STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnector STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnector PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartConnector STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartConnector PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartTask STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartTask PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testAccessors STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testAccessors PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartTaskRedirectToOwner STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartTaskRedirectToOwner PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorConfigAdded STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorConfigAdded PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorConfigUpdate STARTED


Re: [VOTE] 1.0.2 RC0

2018-06-22 Thread Vahid S Hashemian
+1 (non-binding)

Built from source and ran quickstart successfully on Ubuntu (with Java 8).

Thanks for running the release Matthias!
--Vahid




From:   "Matthias J. Sax" 
To: dev@kafka.apache.org, us...@kafka.apache.org, 
kafka-clie...@googlegroups.com
Date:   06/22/2018 10:42 AM
Subject:[VOTE] 1.0.2 RC0



Hello Kafka users, developers and client-developers,

This is the first candidate for release of Apache Kafka 1.0.2.

This is a bug fix release closing 26 tickets:
https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+1.0.2

Release notes for the 1.0.2 release:
http://home.apache.org/~mjsax/kafka-1.0.2-rc0/RELEASE_NOTES.html

*** Please download, test and vote by Tuesday, 6/26/18 end-of-day, so we
can close the vote on Wednesday.

Kafka's KEYS file containing PGP keys we use to sign the release:
http://kafka.apache.org/KEYS

* Release artifacts to be voted upon (source and binary):
http://home.apache.org/~mjsax/kafka-1.0.2-rc0/

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/

* Javadoc:
http://home.apache.org/~mjsax/kafka-1.0.2-rc0/javadoc/

* Tag to be voted upon (off 1.0 branch) is the 1.0.2 tag:
https://github.com/apache/kafka/releases/tag/1.0.2-rc0

* Documentation:
http://kafka.apache.org/10/documentation.html

* Protocol:
http://kafka.apache.org/10/protocol.html

* Successful Jenkins builds for the 1.0 branch:
Unit/integration tests: https://builds.apache.org/job/kafka-1.0-jdk7/211/
System tests:
https://jenkins.confluent.io/job/system-test-kafka/job/1.0/217/

/**

Thanks,
  -Matthias

[attachment "signature.asc" deleted by Vahid S Hashemian/Silicon 
Valley/IBM] 





[jira] [Created] (KAFKA-7091) AdminClient should handle FindCoordinatorResponse errors

2018-06-22 Thread Manikumar (JIRA)
Manikumar created KAFKA-7091:


 Summary: AdminClient should handle FindCoordinatorResponse errors
 Key: KAFKA-7091
 URL: https://issues.apache.org/jira/browse/KAFKA-7091
 Project: Kafka
  Issue Type: Improvement
Reporter: Manikumar
Assignee: Manikumar
 Fix For: 2.0.0


Currently KafkaAdminClient.deleteConsumerGroups, listConsumerGroupOffsets 
methods are ignoring FindCoordinatorResponse errors. We should handle these 
errors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-291: Have separate queues for control requests and data requests

2018-06-22 Thread Lucas Wang
Hi Eno,

Sorry for the delayed response.
- I haven't implemented the feature yet, so no experimental results so far.
And I plan to test it out in the following days.

- You are absolutely right that the priority queue does not completely
prevent data requests from being processed ahead of controller requests.
That being said, I expect it to greatly mitigate the effect of stale
metadata.
In any case, I'll try it out and post the results when I have it.

Regards,
Lucas
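The two-queue idea Lucas describes above can be sketched in a few lines. Class and method names here are hypothetical illustrations, not Kafka's actual RequestChannel implementation:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Minimal sketch of the KIP-291 proposal: control plane requests get their
// own queue and are always drained before data plane requests.
public class TwoQueueChannel {
    private final Queue<String> controlQueue = new ArrayDeque<>();
    private final Queue<String> dataQueue = new ArrayDeque<>();

    void sendControl(String request) { controlQueue.add(request); }
    void sendData(String request) { dataQueue.add(request); }

    // Control requests win whenever both queues are non-empty. Note Eno's
    // caveat from the thread: a data request that is already being processed
    // when a control request arrives is NOT preempted, so prioritization
    // mitigates but does not eliminate the ordering problem.
    String poll() {
        String request = controlQueue.poll();
        return request != null ? request : dataQueue.poll();
    }

    public static void main(String[] args) {
        TwoQueueChannel channel = new TwoQueueChannel();
        channel.sendData("ProduceRequest");
        channel.sendControl("LeaderAndIsrRequest");
        System.out.println(channel.poll()); // prints LeaderAndIsrRequest
        System.out.println(channel.poll()); // prints ProduceRequest
    }
}
```

Under this sketch, the capacity of the control queue is the new config being debated in the thread (reuse queued.max.requests versus add a separate setting).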

On Wed, Jun 20, 2018 at 5:44 AM, Eno Thereska 
wrote:

> Hi Lucas,
>
> Sorry for the delay, just had a look at this. A couple of questions:
> - did you notice any positive change after implementing this KIP? I'm
> wondering if you have any experimental results that show the benefit of the
> two queues.
>
> - priority is usually not sufficient in addressing the problem the KIP
> identifies. Even with priority queues, you will sometimes (often?) have the
> case that data plane requests will be ahead of the control plane requests.
> This happens because the system might have already started processing the
> data plane requests before the control plane ones arrived. So it would be
> good to know what % of the problem this KIP addresses.
>
> Thanks
> Eno
>
> On Fri, Jun 15, 2018 at 4:44 PM, Ted Yu  wrote:
>
> > Change looks good.
> >
> > Thanks
> >
> > On Fri, Jun 15, 2018 at 8:42 AM, Lucas Wang 
> wrote:
> >
> > > Hi Ted,
> > >
> > > Thanks for the suggestion. I've updated the KIP. Please take another
> > look.
> > >
> > > Lucas
> > >
> > > On Thu, Jun 14, 2018 at 6:34 PM, Ted Yu  wrote:
> > >
> > > > Currently in KafkaConfig.scala :
> > > >
> > > >   val QueuedMaxRequests = 500
> > > >
> > > > It would be good if you can include the default value for this new
> > config
> > > > in the KIP.
> > > >
> > > > Thanks
> > > >
> > > > On Thu, Jun 14, 2018 at 4:28 PM, Lucas Wang 
> > > wrote:
> > > >
> > > > > Hi Ted, Dong
> > > > >
> > > > > I've updated the KIP by adding a new config, instead of reusing the
> > > > > existing one.
> > > > > Please take another look when you have time. Thanks a lot!
> > > > >
> > > > > Lucas
> > > > >
> > > > > On Thu, Jun 14, 2018 at 2:33 PM, Ted Yu 
> wrote:
> > > > >
> > > > > > bq.  that's a waste of resource if control request rate is low
> > > > > >
> > > > > > I don't know if control request rate can get to 100,000, likely
> > not.
> > > > Then
> > > > > > using the same bound as that for data requests seems high.
> > > > > >
> > > > > > On Wed, Jun 13, 2018 at 10:13 PM, Lucas Wang <
> > lucasatu...@gmail.com>
> > > > > > wrote:
> > > > > >
> > > > > > > Hi Ted,
> > > > > > >
> > > > > > > Thanks for taking a look at this KIP.
> > > > > > > Let's say today the setting of "queued.max.requests" in
> cluster A
> > > is
> > > > > > 1000,
> > > > > > > while the setting in cluster B is 100,000.
> > > > > > > The 100 times difference might have indicated that machines in
> > > > cluster
> > > > > B
> > > > > > > have larger memory.
> > > > > > >
> > > > > > > By reusing the "queued.max.requests", the controlRequestQueue
> in
> > > > > cluster
> > > > > > B
> > > > > > > automatically
> > > > > > > gets a 100x capacity without explicitly bothering the
> operators.
> > > > > > > I understand the counter argument can be that maybe that's a
> > waste
> > > of
> > > > > > > resource if control request
> > > > > > > rate is low and operators may want to fine tune the capacity of
> > the
> > > > > > > controlRequestQueue.
> > > > > > >
> > > > > > > I'm ok with either approach, and can change it if you or anyone
> > > else
> > > > > > feels
> > > > > > > strong about adding the extra config.
> > > > > > >
> > > > > > > Thanks,
> > > > > > > Lucas
> > > > > > >
> > > > > > >
> > > > > > > On Wed, Jun 13, 2018 at 3:11 PM, Ted Yu 
> > > wrote:
> > > > > > >
> > > > > > > > Lucas:
> > > > > > > > Under Rejected Alternatives, #2, can you elaborate a bit more
> > on
> > > > why
> > > > > > the
> > > > > > > > separate config has bigger impact ?
> > > > > > > >
> > > > > > > > Thanks
> > > > > > > >
> > > > > > > > On Wed, Jun 13, 2018 at 2:00 PM, Dong Lin <
> lindon...@gmail.com
> > >
> > > > > wrote:
> > > > > > > >
> > > > > > > > > Hey Luca,
> > > > > > > > >
> > > > > > > > > Thanks for the KIP. Looks good overall. Some comments
> below:
> > > > > > > > >
> > > > > > > > > - We usually specify the full mbean for the new metrics in
> > the
> > > > KIP.
> > > > > > Can
> > > > > > > > you
> > > > > > > > > specify it in the Public Interface section similar to
> > > > > > > > > KIP-237
> > > > > > > > > <https://cwiki.apache.org/confluence/display/KAFKA/KIP-237%3A+More+Controller+Health+Metrics>
> > > > > > > > > ?
> > > > > > > > >
> > > > > > > > > - Maybe we could follow the same pattern as KIP-153
> > > > > > > > > <https://cwiki.apache.org/confluence/display/KAFKA/KIP-153%3A+Include+only+client+traffic+in+BytesOutPerSec+metric>,
> > > > > > > > > where we keep the existing sensor name "BytesInPerSec" 

[VOTE] 1.0.2 RC0

2018-06-22 Thread Matthias J. Sax
Hello Kafka users, developers and client-developers,

This is the first candidate for release of Apache Kafka 1.0.2.

This is a bug fix release closing 26 tickets:
https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+1.0.2

Release notes for the 1.0.2 release:
http://home.apache.org/~mjsax/kafka-1.0.2-rc0/RELEASE_NOTES.html

*** Please download, test and vote by Tuesday, 6/26/18 end-of-day, so we
can close the vote on Wednesday.

Kafka's KEYS file containing PGP keys we use to sign the release:
http://kafka.apache.org/KEYS

* Release artifacts to be voted upon (source and binary):
http://home.apache.org/~mjsax/kafka-1.0.2-rc0/

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/

* Javadoc:
http://home.apache.org/~mjsax/kafka-1.0.2-rc0/javadoc/

* Tag to be voted upon (off 1.0 branch) is the 1.0.2 tag:
https://github.com/apache/kafka/releases/tag/1.0.2-rc0

* Documentation:
http://kafka.apache.org/10/documentation.html

* Protocol:
http://kafka.apache.org/10/protocol.html

* Successful Jenkins builds for the 1.0 branch:
Unit/integration tests: https://builds.apache.org/job/kafka-1.0-jdk7/211/
System tests:
https://jenkins.confluent.io/job/system-test-kafka/job/1.0/217/

/**

Thanks,
  -Matthias



signature.asc
Description: OpenPGP digital signature


Jenkins build is back to normal : kafka-trunk-jdk10 #245

2018-06-22 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-313: Add KStream.flatTransform and KStream.flatTransformValues

2018-06-22 Thread Bill Bejeck
Thanks for the KIP, +1.

-Bill

On Fri, Jun 22, 2018 at 1:08 PM Ted Yu  wrote:

> +1
>
> On Fri, Jun 22, 2018 at 2:50 AM, Bruno Cadonna  wrote:
>
> > Hi list,
> >
> > I would like to start voting on this KIP.
> >
> > I created a first PR[1] that adds flatTransform. Once I get some
> > feedback, I will start work on flatTransformValues.
> >
> > Best regards,
> > Bruno
> >
> > [1] https://github.com/apache/kafka/pull/5273
> >
>
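The semantics proposed in KIP-313 can be shown without any Kafka dependency: unlike transform, which emits exactly one record per input, flatTransform lets a transformer emit zero or more records. The pipeline below is a plain-Java stand-in for illustration, not the Kafka Streams API itself:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;

public class FlatTransformSketch {
    // Apply a transformer that may return several output records per input
    // record, flattening the results into a single output stream.
    static <I, O> List<O> flatTransform(List<I> input, Function<I, List<O>> transformer) {
        List<O> out = new ArrayList<>();
        for (I record : input) {
            out.addAll(transformer.apply(record)); // each input yields 0..n outputs
        }
        return out;
    }

    public static void main(String[] args) {
        // Split each line into words; empty lines yield no output records.
        List<String> words = flatTransform(
            Arrays.asList("hello world", "", "kafka"),
            line -> line.isEmpty() ? List.of() : Arrays.asList(line.split(" ")));
        System.out.println(words); // prints [hello, world, kafka]
    }
}
```

In the actual KIP, the transformer additionally has access to the ProcessorContext, which a plain function cannot model; see the linked PR for the real interface.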


[VOTE] 1.1.1 RC1

2018-06-22 Thread Dong Lin
Hello Kafka users, developers and client-developers,

This is the second candidate for release of Apache Kafka 1.1.1.

Apache Kafka 1.1.1 is a bug-fix release for the 1.1 branch that was first
released with 1.1.0 about 3 months ago. We have fixed about 25 issues since
that release. A few of the more significant fixes include:

KAFKA-6925  - Fix memory
leak in StreamsMetricsThreadImpl
KAFKA-6937  - In-sync
replica delayed during fetch if replica throttle is exceeded
KAFKA-6917  - Process txn
completion asynchronously to avoid deadlock
KAFKA-6893  - Create
processors before starting acceptor to avoid ArithmeticException
KAFKA-6870  -
Fix ConcurrentModificationException in SampledStat
KAFKA-6878  - Fix
NullPointerException when querying global state store
KAFKA-6879  - Invoke
session init callbacks outside lock to avoid Controller deadlock
KAFKA-6857  - Prevent
follower from truncating to the wrong offset if undefined leader epoch is
requested
KAFKA-6854  - Log cleaner
fails with transaction markers that are deleted during clean
KAFKA-6747  - Check
whether there is in-flight transaction before aborting transaction
KAFKA-6748  - Double
check before scheduling a new task after the punctuate call
KAFKA-6739  -
Fix IllegalArgumentException when down-converting from V2 to V0/V1
KAFKA-6728  -
Fix NullPointerException when instantiating the HeaderConverter

Kafka 1.1.1 release plan:
https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+1.1.1

Release notes for the 1.1.1 release:
http://home.apache.org/~lindong/kafka-1.1.1-rc1/RELEASE_NOTES.html

*** Please download, test and vote by Thursday, Jun 22, 12pm PT ***

Kafka's KEYS file containing PGP keys we use to sign the release:
http://kafka.apache.org/KEYS

* Release artifacts to be voted upon (source and binary):
http://home.apache.org/~lindong/kafka-1.1.1-rc1/

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/

* Javadoc:
http://home.apache.org/~lindong/kafka-1.1.1-rc1/javadoc/

* Tag to be voted upon (off 1.1 branch) is the 1.1.1-rc1 tag:
https://github.com/apache/kafka/tree/1.1.1-rc1

* Documentation:
http://kafka.apache.org/11/documentation.html

* Protocol:
http://kafka.apache.org/11/protocol.html

* Successful Jenkins builds for the 1.1 branch:
Unit/integration tests: https://builds.apache.org/job/kafka-1.1-jdk7/152/
System tests:
https://jenkins.confluent.io/job/system-test-kafka-branch-builder/1817


Please test and verify the release artifacts and submit a vote for this RC,
or report any issues so we can fix them and get a new RC out ASAP. Although
this release vote requires PMC votes to pass, testing, votes, and bug
reports are valuable and appreciated from everyone.

Cheers,
Dong


Re: [VOTE] KIP-313: Add KStream.flatTransform and KStream.flatTransformValues

2018-06-22 Thread Ted Yu
+1

On Fri, Jun 22, 2018 at 2:50 AM, Bruno Cadonna  wrote:

> Hi list,
>
> I would like to start voting on this KIP.
>
> I created a first PR[1] that adds flatTransform. Once I get some
> feedback, I will start work on flatTransformValues.
>
> Best regards,
> Bruno
>
> [1] https://github.com/apache/kafka/pull/5273
>


Re: [DISCUSS]KIP-216: IQ should throw different exceptions for different errors

2018-06-22 Thread vito jeng
Matthias,

Thank you for your assistance.

> what is the status of this KIP?

Unfortunately, there is no further progress.
About seven weeks ago, I was injured playing sports and broke my left
wrist. Much of my work has been affected, including this KIP and its
implementation.


> I just re-read it, and have a couple of follow up comments. Why do we
> discuss the internal exceptions you want to add? Also, do we really need
> them? Can't we just throw the correct exception directly instead of
> wrapping it later?

I think you may be right. As I said previously:
"The original idea is that we can distinguish different state store
exception for different handling. But to be honest, I am not quite sure
this is necessary. Maybe have some change during implementation."

During the implementation, I also came to feel that we may not need to
wrap it. We can just throw the correct exception directly.


> Is `StreamThreadNotRunningException` really a retryable error?

When KafkaStream state is REBALANCING, I think it is a retryable error.

StreamThreadStateStoreProvider#stores() will throw
StreamThreadNotRunningException when StreamThread state is not RUNNING. The
user can retry until KafkaStream state is RUNNING.


> When would we throw an `StateStoreEmptyException`? The semantics is
unclear to me atm.

> When the state is RUNNING, is `StateStoreClosedException` a retryable
error?

These two comments will be answered in another mail.



---
Vito
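The hierarchy under discussion can be sketched as follows. The new exceptions extend InvalidStateStoreException for backward compatibility, so callers can catch the retryable subtype specifically. Class names follow the thread, but this is an illustration of the proposal, not the final Kafka Streams API:

```java
public class IqExceptionSketch {
    // Base type kept public for callers who don't want to distinguish cases.
    static class InvalidStateStoreException extends RuntimeException {
        InvalidStateStoreException(String msg) { super(msg); }
    }

    // Retryable: the store may become available once rebalancing finishes
    // and the stream thread returns to RUNNING.
    static class StreamThreadNotRunningException extends InvalidStateStoreException {
        StreamThreadNotRunningException(String msg) { super(msg); }
    }

    // Fatal for this store handle: the store has moved to another instance,
    // so the caller must rediscover it rather than retry the same handle.
    static class StateStoreMigratedException extends InvalidStateStoreException {
        StateStoreMigratedException(String msg) { super(msg); }
    }

    public static void main(String[] args) {
        try {
            throw new StreamThreadNotRunningException("stream thread is REBALANCING");
        } catch (StreamThreadNotRunningException e) {
            System.out.println("retryable: " + e.getMessage());
        } catch (InvalidStateStoreException e) {
            System.out.println("fatal: " + e.getMessage());
        }
    }
}
```

This also illustrates Guozhang's point earlier in the thread: keeping the base class non-deprecated lets callers who don't care about the distinction catch InvalidStateStoreException once instead of catching each subtype.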

On Mon, Jun 11, 2018 at 8:12 AM, Matthias J. Sax 
wrote:

> Vito,
>
> what is the status of this KIP?
>
> I just re-read it, and have a couple of follow up comments. Why do we
> discuss the internal exceptions you want to add? Also, do we really need
> them? Can't we just throw the correct exception directly instead of
> wrapping it later?
>
> When would we throw an `StateStoreEmptyException`? The semantics is
> unclear to me atm.
>
> Is `StreamThreadNotRunningException` really a retryable error?
>
> When the state is RUNNING, is `StateStoreClosedException` a retryable
> error?
>
> One more nit: ReadOnlyWindowStore got a new method #fetch(K key, long
> time) that should be added.
>
>
> Overall I like the KIP but some details are still unclear. Maybe it
> might help if you open an PR in parallel?
>
>
> -Matthias
>
> On 4/24/18 8:18 AM, vito jeng wrote:
> > Hi, Guozhang,
> >
> > Thanks for the comment!
> >
> >
> >
> > Hi, Bill,
> >
> > I'll try to make some update to make the KIP better.
> >
> > Thanks for the comment!
> >
> >
> > ---
> > Vito
> >
> > On Sat, Apr 21, 2018 at 5:40 AM, Bill Bejeck  wrote:
> >
> >> Hi Vito,
> >>
> >> Thanks for the KIP, overall it's a +1 from me.
> >>
> >> At this point, the only thing I would change is possibly removing the
> >> listing of all methods called by the user and the listing of all store
> >> types and focus on what states result in which exceptions thrown to the
> >> user.
> >>
> >> Thanks,
> >> Bill
> >>
> >> On Fri, Apr 20, 2018 at 2:10 PM, Guozhang Wang 
> wrote:
> >>
> >>> Thanks for the KIP Vito!
> >>>
> >>> I made a pass over the wiki and it looks great to me. I'm +1 on the
> KIP.
> >>>
> >>> About the base class InvalidStateStoreException itself, I'd actually
> >>> suggest we do not deprecate it but still expose it as part of the
> public
> >>> API, for people who do not want to handle these cases differently (if
> we
> >>> deprecate it then we are enforcing them to capture all three exceptions
> >>> one-by-one).
> >>>
> >>>
> >>> Guozhang
> >>>
> >>>
> >>> On Fri, Apr 20, 2018 at 9:14 AM, John Roesler 
> wrote:
> >>>
>  Hi Vito,
> 
>  Thanks for the KIP!
> 
>  I think it's much nicer to give callers different exceptions to tell
> >> them
>  whether the state store got migrated, whether it's still initializing,
> >> or
>  whether there's some unrecoverable error.
> 
>  In the KIP, it's typically not necessary to discuss non-user-facing
> >>> details
>  such as what exceptions we will throw internally. The KIP is primarily
> >> to
>  discuss public interface changes.
> 
>  You might consider simply removing all the internal details from the
> >> KIP,
>  which will have the dual advantage that it makes the KIP smaller and
> >>> easier
>  to agree on, as well as giving you more freedom in the internal
> details
>  when it comes to implementation.
> 
>  I like your decision to have your refined exceptions extend
>  InvalidStateStoreException to ensure backward compatibility. Since we
> >>> want
>  to encourage callers to catch the more specific exceptions, and we
> >> don't
>  expect to ever throw a raw InvalidStateStoreException anymore, you
> >> might
>  consider adding the @Deprecated annotation to
> >> InvalidStateStoreException.
>  This will gently encourage callers to migrate to the new exception and
> >>> open
>  the possibility of removing InvalidStateStoreException entirely in a
> >>> future
>  release.
> 
>  Thanks,
>  -John

Re: [VOTE] 2.0.0 RC0

2018-06-22 Thread Rajini Sivaram
Any and all testing is welcome, but testing in the following areas would be
particularly helpful:

   1. Performance and stress testing. Heroku and LinkedIn have helped with this
   in the past (and issues have been found and fixed).
   2. Client developers can verify that their clients can produce/consume
   compressed/uncompressed data to/from 2.0.0 brokers.
   3. End users can verify that their apps work correctly with the new
   release.

Thank you!

Rajini

On Thu, Jun 21, 2018 at 12:24 PM, Rajini Sivaram 
wrote:

> Sorry, the documentation does go live with the RC (thanks to Ismael for
> pointing this out), so here are the links:
>
> * Documentation:
>
> http://kafka.apache.org/20/documentation.html
>
>
> * Protocol:
>
> http://kafka.apache.org/20/protocol.html
>
>
>
> Regards,
>
>
> Rajini
>
>
> On Wed, Jun 20, 2018 at 9:08 PM, Rajini Sivaram 
> wrote:
>
>> Hello Kafka users, developers and client-developers,
>>
>>
>> This is the first candidate for release of Apache Kafka 2.0.0.
>>
>>
>> This is a major version release of Apache Kafka. It includes 40 new KIPs
>> and several critical bug fixes. Please see the 2.0.0 release plan for
>> more details:
>>
>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=80448820
>>
>>
>> A few notable highlights:
>>
>>- Prefixed wildcard ACLs (KIP-290), Fine grained ACLs for
>>CreateTopics (KIP-277)
>>- SASL/OAUTHBEARER implementation (KIP-255)
>>- Improved quota communication and customization of quotas (KIP-219,
>>KIP-257)
>>- Efficient memory usage for down conversion (KIP-283)
>>- Fix log divergence between leader and follower during fast leader
>>failover (KIP-279)
>>- Drop support for Java 7 and remove deprecated code, including the
>>old Scala clients
>>- Connect REST extension plugin, support for externalizing secrets
>>and improved error handling (KIP-285, KIP-297, KIP-298 etc.)
>>- Scala API for Kafka Streams and other Streams API improvements
>>(KIP-270, KIP-150, KIP-245, KIP-251 etc.)
>>
>>
>> Release notes for the 2.0.0 release:
>>
>> http://home.apache.org/~rsivaram/kafka-2.0.0-rc0/RELEASE_NOTES.html
>>
>>
>> *** Please download, test and vote by Monday, June 25, 4pm PT
>>
>>
>> Kafka's KEYS file containing PGP keys we use to sign the release:
>>
>> http://kafka.apache.org/KEYS
>>
>>
>> * Release artifacts to be voted upon (source and binary):
>>
>> http://home.apache.org/~rsivaram/kafka-2.0.0-rc0/
>>
>>
>> * Maven artifacts to be voted upon:
>>
>> https://repository.apache.org/content/groups/staging/
>>
>>
>> * Javadoc:
>>
>> http://home.apache.org/~rsivaram/kafka-2.0.0-rc0/javadoc/
>>
>>
>> * Tag to be voted upon (off 2.0 branch) is the 2.0.0 tag:
>>
>> https://github.com/apache/kafka/tree/2.0.0-rc0
>>
>>
>> * Documentation:
>>
>> http://home.apache.org/~rsivaram/kafka-2.0.0-rc0/kafka_2.11-2.0.0-site-docs.tgz
>>
>> (Since documentation cannot go live until 2.0.0 is released, please
>> download and verify)
>>
>>
>> * Successful Jenkins builds for the 2.0 branch:
>>
>> Unit/integration tests: https://builds.apache.org/job/kafka-2.0-jdk8/48/
>>
>> System tests: https://jenkins.confluent.io/job/system-test-kafka/job/2.0/6/
>> (2 failures are known flaky tests)
>>
>>
>>
>> Please test and verify the release artifacts and submit a vote for this RC
>> or report any issues so that we can fix them and roll out a new RC ASAP!
>>
>> Although this release vote requires PMC votes to pass, testing, votes,
>> and bug reports are valuable and appreciated from everyone.
>>
>>
>> Thanks,
>>
>>
>> Rajini
>>
>>
>>
>


[jira] [Created] (KAFKA-7090) Zookeeper client setting in server-properties

2018-06-22 Thread Christian Tramnitz (JIRA)
Christian Tramnitz created KAFKA-7090:
-

 Summary: Zookeeper client setting in server-properties
 Key: KAFKA-7090
 URL: https://issues.apache.org/jira/browse/KAFKA-7090
 Project: Kafka
  Issue Type: New Feature
  Components: config, documentation
Reporter: Christian Tramnitz


There are several ZooKeeper client settings that may be used to connect to ZK.

Currently, only very few zookeeper.* settings are supported in Kafka's
server.properties file. Wouldn't it make sense to support all ZooKeeper client
settings there? If not, where would they need to go?

For example, to use ZooKeeper 3.5 with TLS enabled, the following properties
are required:

zookeeper.clientCnxnSocket
zookeeper.client.secure
zookeeper.ssl.keyStore.location
zookeeper.ssl.keyStore.password
zookeeper.ssl.trustStore.location
zookeeper.ssl.trustStore.password

It's obviously possible to pass them through "-D", but especially for the 
keystore password, I'd be more comfortable with this sitting in the properties 
file than being visible in the process list...
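
Until such support exists, one way to set these today is as JVM system properties via KAFKA_OPTS, which kafka-run-class.sh appends to the java command line. This is a sketch: the keystore paths and passwords are placeholders, and ClientCnxnSocketNetty is the Netty socket implementation ZooKeeper 3.5 requires for TLS:

```shell
# Pass the ZooKeeper 3.5 TLS client settings to the broker JVM as -D
# system properties before starting the broker.
export KAFKA_OPTS="-Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty \
 -Dzookeeper.client.secure=true \
 -Dzookeeper.ssl.keyStore.location=/etc/kafka/ssl/zk-keystore.jks \
 -Dzookeeper.ssl.keyStore.password=changeit \
 -Dzookeeper.ssl.trustStore.location=/etc/kafka/ssl/zk-truststore.jks \
 -Dzookeeper.ssl.trustStore.password=changeit"

# then: bin/kafka-server-start.sh config/server.properties
```

As the report notes, this leaves the keystore password visible in the process list, which is exactly the drawback that server.properties support would avoid.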



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[VOTE] KIP-313: Add KStream.flatTransform and KStream.flatTransformValues

2018-06-22 Thread Bruno Cadonna
Hi list,

I would like to start the vote on this KIP.

I created a first PR[1] that adds flatTransform. Once I get some
feedback, I will start work on flatTransformValues.

Best regards,
Bruno

[1] https://github.com/apache/kafka/pull/5273
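
For context, the proposed flatTransform differs from transform in that each input record may produce zero or more output records. A minimal stand-in sketch of that one-to-many shape, in plain Java with placeholder types rather than the actual Kafka Streams API or the signature the KIP will finalize:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;

public class FlatTransformSketch {

    // Stand-in for Kafka Streams' KeyValue<K, V> pair.
    static final class KeyValue<K, V> {
        final K key;
        final V value;
        KeyValue(K key, V value) { this.key = key; this.value = value; }
        @Override public String toString() { return key + "=" + value; }
    }

    // flatTransform-style operation: each input record maps to an Iterable of
    // output records, and the iterables are flattened into one output stream.
    // (transform, by contrast, is strictly one record in, one record out.)
    static <K, V, K1, V1> List<KeyValue<K1, V1>> flatTransform(
            List<KeyValue<K, V>> input,
            Function<KeyValue<K, V>, Iterable<KeyValue<K1, V1>>> transformer) {
        List<KeyValue<K1, V1>> out = new ArrayList<>();
        for (KeyValue<K, V> record : input) {
            for (KeyValue<K1, V1> mapped : transformer.apply(record)) {
                out.add(mapped);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // Split each sentence value into one output record per word.
        List<KeyValue<String, String>> words = flatTransform(
                Arrays.asList(new KeyValue<>("k1", "hello world")),
                r -> {
                    List<KeyValue<String, String>> parts = new ArrayList<>();
                    for (String w : r.value.split(" ")) {
                        parts.add(new KeyValue<>(r.key, w));
                    }
                    return parts;
                });
        System.out.println(words); // [k1=hello, k1=world]
    }
}
```

Without flatTransform, the same effect requires a transform that returns a collection followed by a separate flatMap step, which is the awkwardness the KIP aims to remove.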


[DISCUSS] KIP-308: Support dynamic update of max.connections.per.ip/max.connections.per.ip.overrides configs

2018-06-22 Thread Manikumar
Hi all,

I have created a KIP to add support for dynamic update of
max.connections.per.ip/max.connections.per.ip.overrides configs

https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=85474993
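
If adopted, updating these configs would presumably follow the existing dynamic broker config flow introduced by KIP-226, via kafka-configs.sh. A sketch against a hypothetical local broker; the config name is taken from the KIP title, and the exact entity scope is an assumption:

```shell
# Update the cluster-wide default dynamically, without a broker restart.
bin/kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type brokers --entity-default --alter \
  --add-config max.connections.per.ip=200

# Describe the currently applied dynamic broker configs to verify the change.
bin/kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type brokers --entity-default --describe
```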

Any feedback is appreciated.

Thanks