[jira] [Commented] (KAFKA-4556) unordered messages when multiple topics are combined in single topic through stream

2017-01-10 Thread Savdeep Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15817551#comment-15817551
 ] 

Savdeep Singh commented on KAFKA-4556:
--

Can you give a specific example of stream windowing/aggregation/joins/etc. 
that can solve this issue?

> unordered messages when multiple topics are combined in single topic through 
> stream
> ---
>
> Key: KAFKA-4556
> URL: https://issues.apache.org/jira/browse/KAFKA-4556
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, producer , streams
>Affects Versions: 0.10.0.1
>Reporter: Savdeep Singh
> Attachments: stream topology.png
>
>
> When a builder is bound to multiple topics, the single resultant topic has 
> an unordered set of messages.
> The issue appears at millisecond granularity, when messages with the same 
> millisecond timestamp are added to the source topics.
> Scenario: 1 producer p1; 2 topics t1 and t2; a stream picks from these 2 
> topics and saves into a resulting topic t3, which has 4 partitions and 4 
> consumers (one per partition).
> Case: p1 adds messages with the same millisecond timestamp into t1 and t2. 
> The stream combines them to form t3. When t3 is consumed, messages that 
> share a millisecond timestamp arrive in a different order.
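For illustration, the behavior described above can be sketched outside of Kafka: two records that share a millisecond timestamp have no defined relative order after the merge, but sorting a buffered batch by (timestamp, per-producer sequence) restores a deterministic order. This is a hypothetical sketch, not the Kafka Streams API; the Record class and the sequence tiebreaker are assumptions made for the example.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical illustration (not Kafka Streams API): records from two source
// topics that share a millisecond timestamp have no defined relative order
// after the merge. Sorting a small buffer by (timestamp, sequence) restores a
// deterministic order, at the cost of the buffering delay.
public class SameMillisecondOrdering {
    static final class Record {
        final long timestampMs;   // producer timestamp, millisecond precision
        final long sequence;      // per-producer sequence used as tiebreaker
        final String value;
        Record(long timestampMs, long sequence, String value) {
            this.timestampMs = timestampMs;
            this.sequence = sequence;
            this.value = value;
        }
    }

    // Deterministic re-ordering of a merged buffer.
    public static List<Record> reorder(List<Record> merged) {
        List<Record> out = new ArrayList<>(merged);
        out.sort(Comparator.comparingLong((Record r) -> r.timestampMs)
                           .thenComparingLong(r -> r.sequence));
        return out;
    }

    public static void main(String[] args) {
        // t1 and t2 both received a record at the same millisecond.
        List<Record> merged = new ArrayList<>();
        merged.add(new Record(1000L, 2, "from-t2"));  // arrived first in t3
        merged.add(new Record(1000L, 1, "from-t1"));  // arrived second in t3
        List<Record> ordered = reorder(merged);
        if (!ordered.get(0).value.equals("from-t1")) throw new AssertionError();
        System.out.println("deterministic order restored");
    }
}
```

The price of any such reordering is latency: the consumer must buffer at least as long as the maximum skew between the merged topics.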



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-3042) updateIsr should stop after failed several times due to zkVersion issue

2017-01-10 Thread Pengwei (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15817467#comment-15817467
 ] 

Pengwei edited comment on KAFKA-3042 at 1/11/17 7:27 AM:
-

[~lindong]  It seems I still encounter this issue after applying your patch in 
our test environment.
The log still prints the message "Cached zkVersion " and cannot recover.

The reproduction steps we used: cut off the network of three brokers for some 
time and then restore it; after that, one of the brokers keeps printing the 
cached zkVersion log.


was (Author: pengwei):
[~lindong]  It seems I still encounter this issue after applying your patch in 
our test environment.
The log still prints the message "Cached zkVersion " and cannot recover.

> updateIsr should stop after failed several times due to zkVersion issue
> ---
>
> Key: KAFKA-3042
> URL: https://issues.apache.org/jira/browse/KAFKA-3042
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
> Environment: jdk 1.7
> centos 6.4
>Reporter: Jiahongchao
>Assignee: Dong Lin
>  Labels: reliability
> Fix For: 0.10.2.0
>
> Attachments: controller.log, server.log.2016-03-23-01, 
> state-change.log
>
>
> Sometimes one broker may repeatedly log
> "Cached zkVersion 54 not equal to that in zookeeper, skip updating ISR"
> I think this is because the broker considers itself the leader when in fact 
> it's a follower.
> So after several failed tries, it needs to find out who is the leader.
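The reporter's suggestion — give up after a bounded number of failed conditional updates and re-resolve state instead of retrying forever — can be sketched as follows. This is a hypothetical illustration of the idea, not Kafka's controller code; the class and method names are invented for the example, with an AtomicLong standing in for ZooKeeper's versioned setData.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of the idea in this report (not Kafka's actual code):
// a conditional update keyed on a cached version should give up after a few
// mismatches and refresh its cached state instead of retrying forever.
public class VersionedUpdateSketch {
    static final int MAX_ATTEMPTS = 3;

    // Stand-in for the znode: a version that is bumped on every write.
    final AtomicLong storeVersion = new AtomicLong(7);
    long cachedVersion = 3;                 // stale cache, like the stuck broker
    final AtomicInteger refreshes = new AtomicInteger();

    boolean conditionalUpdate() {
        // Succeeds only if the cached version matches the store version,
        // mirroring ZooKeeper's setData(path, data, expectedVersion).
        return storeVersion.compareAndSet(cachedVersion, cachedVersion + 1);
    }

    void refreshFromStore() {
        cachedVersion = storeVersion.get();  // re-read instead of looping
        refreshes.incrementAndGet();
    }

    // Retry a bounded number of times, then refresh the cache and retry once.
    public boolean updateWithBackoff() {
        for (int attempt = 0; attempt < MAX_ATTEMPTS; attempt++) {
            if (conditionalUpdate()) return true;
        }
        refreshFromStore();
        return conditionalUpdate();
    }

    public static void main(String[] args) {
        VersionedUpdateSketch s = new VersionedUpdateSketch();
        boolean ok = s.updateWithBackoff();
        if (!ok || s.refreshes.get() != 1) throw new AssertionError();
        System.out.println("recovered after refreshing cached version");
    }
}
```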





[jira] [Reopened] (KAFKA-3042) updateIsr should stop after failed several times due to zkVersion issue

2017-01-10 Thread Pengwei (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengwei reopened KAFKA-3042:


> updateIsr should stop after failed several times due to zkVersion issue
> ---
>
> Key: KAFKA-3042
> URL: https://issues.apache.org/jira/browse/KAFKA-3042
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
> Environment: jdk 1.7
> centos 6.4
>Reporter: Jiahongchao
>Assignee: Dong Lin
>  Labels: reliability
> Fix For: 0.10.2.0
>
> Attachments: controller.log, server.log.2016-03-23-01, 
> state-change.log
>
>
> Sometimes one broker may repeatedly log
> "Cached zkVersion 54 not equal to that in zookeeper, skip updating ISR"
> I think this is because the broker considers itself the leader when in fact 
> it's a follower.
> So after several failed tries, it needs to find out who is the leader.





[jira] [Commented] (KAFKA-4610) getting error:Batch containing 3 record(s) expired due to timeout while requesting metadata from brokers for test2R2P2-1

2017-01-10 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15817473#comment-15817473
 ] 

Ewen Cheslack-Postava commented on KAFKA-4610:
--

The current timeout for such requests defaults to 30s. If nothing else is going 
wrong, then this seems to suggest the broker is taking longer than that to 
restart or is failing for longer than that for some reason.

The repeated errors trying to request metadata from a specific broker suggest 
that, despite possibly having multiple bootstrap brokers, you may be hitting 
https://issues.apache.org/jira/browse/KAFKA-1843 I think in your case the 
client repeatedly tries to use the broker that is currently offline and 
therefore repeatedly times out.

Unfortunately, I don't have a quick solution for this -- we'd need to 
concretely track down the root cause and then address it.
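The ~30s default Ewen mentions corresponds to the producer's request.timeout.ms setting in this era. As a sketch, these are the settings one might look at when diagnosing this error (the values shown are illustrative, not recommendations, and raising the timeout only papers over a broker that stays unreachable longer than the timeout):

```java
import java.util.Properties;

// Illustrative producer settings relevant to this error (values are examples,
// not recommendations). "request.timeout.ms" is the ~30s default referred to
// above; batches that cannot be sent within it expire with a TimeoutException.
public class ProducerTimeoutConfig {
    public static Properties settings() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092,broker2:9092");
        props.put("acks", "1");                    // matches --request-required-acks 1
        props.put("retries", "10");                // matches --message-send-max-retries 10
        props.put("request.timeout.ms", "30000");  // default in 0.10.x
        return props;
    }

    public static void main(String[] args) {
        Properties p = settings();
        if (!"30000".equals(p.getProperty("request.timeout.ms")))
            throw new AssertionError();
        System.out.println("request.timeout.ms=" + p.getProperty("request.timeout.ms"));
    }
}
```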

> getting error:Batch containing 3 record(s) expired due to timeout while 
> requesting metadata from brokers for test2R2P2-1
> 
>
> Key: KAFKA-4610
> URL: https://issues.apache.org/jira/browse/KAFKA-4610
> Project: Kafka
>  Issue Type: Bug
> Environment: Dev
>Reporter: sandeep kumar singh
>
> I am getting the below error when running the producer client, which takes 
> messages from an input file kafka_message.log. This log file is filled with 
> 10 records per second, each message 4096 bytes long.
> error - 
> [2017-01-09 14:45:24,813] ERROR Error when sending message to topic test2R2P2 
> with key: null, value: 4096 bytes with error: 
> (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
> org.apache.kafka.common.errors.TimeoutException: Batch containing 3 record(s) 
> expired due to timeout while requesting metadata from brokers for test2R2P2-0
> (the same ERROR/TimeoutException pair is logged four more times at 
> 14:45:24,816)
> The command I run:
> $ bin/kafka-console-producer.sh --broker-list x.x.x.x:,x.x.x.x: 
> --batch-size 1000 --message-send-max-retries 10 --request-required-acks 1 
> --topic test2R2P2 <~/kafka_message.log
> There are 2 brokers running, and the topic has 2 partitions and replication 
> factor 2. 
> Could you please help me understand what that error means?
> I also see message loss when I manually restart one of the brokers while the 
> kafka-producer-perf-test command is running. Is this expected behavior?
> thanks
> Sandeep





[jira] [Commented] (KAFKA-3042) updateIsr should stop after failed several times due to zkVersion issue

2017-01-10 Thread Pengwei (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15817467#comment-15817467
 ] 

Pengwei commented on KAFKA-3042:


[~lindong]  It seems I still encounter this issue after applying your patch in 
our test environment.
The log still prints the message "Cached zkVersion " and cannot recover.

> updateIsr should stop after failed several times due to zkVersion issue
> ---
>
> Key: KAFKA-3042
> URL: https://issues.apache.org/jira/browse/KAFKA-3042
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
> Environment: jdk 1.7
> centos 6.4
>Reporter: Jiahongchao
>Assignee: Dong Lin
>  Labels: reliability
> Fix For: 0.10.2.0
>
> Attachments: controller.log, server.log.2016-03-23-01, 
> state-change.log
>
>
> Sometimes one broker may repeatedly log
> "Cached zkVersion 54 not equal to that in zookeeper, skip updating ISR"
> I think this is because the broker considers itself the leader when in fact 
> it's a follower.
> So after several failed tries, it needs to find out who is the leader.





Re: [DISCUSS] KIP-109: Old Consumer Deprecation

2017-01-10 Thread Ewen Cheslack-Postava
It might also be helpful for anyone who has done a bit of thinking around
the migration story to dump their thoughts into the JIRA re: the plan.
There are also questions I would have around what the exact requirements
are. For example, if supporting auto commit is required, things get a lot
hairier. If commits are explicit and standard partition assignors are used,
then we may be able to get away with something relatively cheap
(implementation-wise), e.g. you deploy with both clients but can start
accepting data from the new consumer as soon as assignments match between
the old & new (with some coordination to deal with timing differences
between groups).

It's really hard to judge how aggressive we might want to be with
deprecating without any idea what the LOE is to provide the transition
tool, and difficult to plan for building the transition tool as a prereq
for deprecation if we don't have any rough LOE. If the LOE is small enough,
maybe deprecating now is fine as long as we still provide a big enough
window for transition after the tool is built.

I also think it's worth pointing out that the other important impact of
deprecation is that folks should stop writing *new* code with the
deprecated consumer. If we can encourage folks to start using the new
consumer now, it makes their transition experience better since they have
fewer apps to move. For this reason alone, I'd like to deprecate the old
consumer before having the full migration story (with the docs/release note
that we'll provide something with plenty of time to make the transition
before removing the client entirely).

-Ewen

On Tue, Jan 10, 2017 at 2:11 PM, Ismael Juma  wrote:

> Hi Renu,
>
> 0.10 was released in May 2016. 0.11 will not be before May 2017 (it
> could be later if the next release turns out to be 0.10.3). So the most
> recent data indicates a minimum of 1 year between major releases, but no
> decision has been made on future major releases yet.
>
> The impact would indeed be build warnings, but those can be suppressed
> easily in the build config or via @SuppressWarnings annotations. Do you use
> the consumer API directly or do you have a company wrapper?
>
> Ismael
>
> On Tue, Jan 10, 2017 at 8:53 PM, Renu Tewari  wrote:
>
> > Hi Ismael,
> > What are the expected timelines we are talking about between the major
> > releases? At LI we are expecting at least 1 year between the old consumer
> > deprecation and removal so we have enough time to upgrade all
> > applications. The rollout to the new consumer has hit many hurdles, so it
> > hasn't proceeded at the expected pace. What impact would an official
> > deprecation have on applications? Any warnings would be disruptive, and we
> > would prefer that happens when there is a migration plan in place so we
> > have a bound on how long it will take. There are too many unknowns at this
> > time.
> >
> > Thoughts on timelines?
> >
> > regards
> > Renu
> >
> > On Mon, Jan 9, 2017 at 6:34 PM, Ismael Juma  wrote:
> >
> > > Hi Joel,
> > >
> > > Great to hear that LinkedIn is likely to implement KAFKA-4513. :)
> > >
> > > Yes, the KIP as it stands is a compromise given the situation. We could
> > > push the deprecation to the subsequent release: likely to be 0.11.0.0
> > since
> > > there are a number of KIPs that require message format changes. This
> > would
> > > mean that the old consumers would not be removed before 0.12.0.0 (the
> > major
> > > release after 0.11.0.0). Would that work better for you all?
> > >
> > > Ismael
> > >
> > > On Tue, Jan 10, 2017 at 12:52 AM, Joel Koshy 
> > wrote:
> > >
> > > > >
> > > > >
> > > > > The ideal scenario would be for us to provide a tool for no
> downtime
> > > > > migration as discussed in the original thread (I filed
> > > > > https://issues.apache.org/jira/browse/KAFKA-4513 in response to
> that
> > > > > discussion). There are a few issues, however:
> > > > >
> > > > >- There doesn't seem to be much demand for it (outside of
> > LinkedIn,
> > > at
> > > > >least)
> > > > >- No-one is working on it or has indicated that they are
> planning
> > to
> > > > >work on it
> > > > >- It's a non-trivial change and it requires a good amount of
> > testing
> > > > to
> > > > >ensure it works as expected
> > > > >
> > > >
> > > > For LinkedIn: while there are a few consuming applications for which
> > the
> > > > current shut-down and restart approach to migration will suffice, I
> > doubt
> > > > we will be able to do this for majority of services that are outside
> > our
> > > > direct control. Given that a seamless migration is a pre-req for us
> to
> > > > switch to the new consumer widely (there are a few use-cases already
> on
> > > it)
> > > > it is something that we (LinkedIn) will likely implement although we
> > > > haven't done further brainstorming/improvements beyond what was
> > proposed
> > > in
> > > > the other deprecation thread.
> > > >
> > > >
> > > > > In the meantime, we have this suboptimal situation wher

Re: [VOTE] KIP-109: Old Consumer Deprecation

2017-01-10 Thread Ewen Cheslack-Postava
Ismael,

Is that regardless of whether it ends up being a major/minor version? i.e.
given the way we've phrased (and I think started to follow through on)
deprecations, if the next releases were 0.10.3.0 and then 0.11.0.0, the
deprecation period would only be one release. That would be a tiny window
for a huge deprecation. If the next release ended up 0.11.0.0, then we'd
wait (presumably multiple releases until) 0.12.0.0 which could be something
like a year.

I think we should deprecate the APIs ASAP since they are effectively
unmaintained (or very minimally maintained at best). And I'd actually even
like to do so in 0.10.2.0.

Perhaps we should consider a slightly customized policy instead? Major
deprecations like this might require something slightly different. For
example, I think a KIP + release notes that explain we're marking the
consumer as deprecated now but it will continue to exist for at least 1
year (regardless of release versions) and will be removed in the next major
release *after* 1 year would give users plenty of warning and not result in
any weirdness if a major version bump happens relatively soon.

(Sorry to drag this into the VOTE thread... If we can agree on that
deprecation/removal schedule, I'd love to still get this in by feature
freeze, especially since the patch is presumably trivial.)

-Ewen

On Tue, Jan 10, 2017 at 11:58 AM, Gwen Shapira  wrote:

> +1
>
> On Mon, Jan 9, 2017 at 8:58 AM, Vahid S Hashemian
>  wrote:
> > Happy Monday,
> >
> > I'd like to thank everyone who participated in the discussion around this
> > KIP and shared their opinion.
> >
> > The only concern that was raised was not having a defined migration plan
> > yet for existing users of the old consumer.
> > I hope that responses to this concern (on the discussion thread) have
> been
> > satisfactory.
> >
> > Given the short time we have until the 0.10.2.0 cut-off date I'd like to
> > start voting on this KIP.
> >
> > KIP:
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 109%3A+Old+Consumer+Deprecation
> > Discussion thread:
> > https://www.mail-archive.com/dev@kafka.apache.org/msg63427.html
> >
> > Thanks.
> > --Vahid
> >
> >
>
>
>
> --
> Gwen Shapira
> Product Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog
>


[jira] [Commented] (KAFKA-4604) Gradle Eclipse plugin creates projects for non project subfolders

2017-01-10 Thread Dhwani Katagade (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15816852#comment-15816852
 ] 

Dhwani Katagade commented on KAFKA-4604:


Hi

I have tried it with Eclipse, since Eclipse is the IDE I use. On a dev list 
thread Ismael confirmed that the IntelliJ setup works just fine, which is 
plausible since the IntelliJ project model is slightly different.
http://mail-archives.apache.org/mod_mbox/kafka-dev/201701.mbox/%3CCAD5tkZa12yGShXbbDXGUgMK7Upo-qQQx4tMn88xjYP%2B5nMfMnA%40mail.gmail.com%3E


> Gradle Eclipse plugin creates projects for non project subfolders
> -
>
> Key: KAFKA-4604
> URL: https://issues.apache.org/jira/browse/KAFKA-4604
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.11.0.0
> Environment: Tried with Gradle 3.2.1 and Eclipse Neon
>Reporter: Dhwani Katagade
>Assignee: Dhwani Katagade
>Priority: Minor
>  Labels: build, easyfix, patch
>
> Running the command *./gradlew eclipse* generates .project and .classpath 
> files for all projects. But it also generates these files for the root 
> project folder and the connect subfolder that holds the 4 connector projects 
> even though these folders are not actual project folders.
> The unnecessary connect project is benign, but the unnecessary kafka project 
> created for the root folder has a side effect. The root folder has a bin 
> directory that holds some scripts. When a _Clean all projects_ is done in 
> Eclipse, it cleans up the scripts in the bin directory. These have to be 
> restored by running *git checkout \-\- *. The same could become a 
> problem for the connect project as well if tomorrow we place some files under 
> connect/bin.





[jira] [Commented] (KAFKA-3452) Support session windows

2017-01-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15816729#comment-15816729
 ] 

ASF GitHub Bot commented on KAFKA-3452:
---

GitHub user mjsax opened a pull request:

https://github.com/apache/kafka/pull/2342

KAFKA-3452: follow-up -- introduce SesssionWindows

 - TimeWindows represent half-open time intervals while SessionWindows 
represent closed time intervals
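
The half-open vs. closed distinction above can be sketched with simple predicates (simplified stand-ins, not the real TimeWindow/SessionWindow classes): a time window [start, end) excludes its end point, while a session window [start, end] includes it, which matters when deciding whether a timestamp falls inside a window.

```java
// Sketch of the interval semantics described above (simplified, not the real
// Streams classes): half-open time windows exclude the end timestamp, closed
// session windows include it.
public class WindowSemantics {
    // Half-open: start <= ts < end
    public static boolean inTimeWindow(long start, long end, long ts) {
        return ts >= start && ts < end;
    }

    // Closed: start <= ts <= end
    public static boolean inSessionWindow(long start, long end, long ts) {
        return ts >= start && ts <= end;
    }

    public static void main(String[] args) {
        // A timestamp equal to the window end belongs only to the session window.
        if (inTimeWindow(0, 100, 100)) throw new AssertionError();
        if (!inSessionWindow(0, 100, 100)) throw new AssertionError();
        System.out.println("end point: excluded by time window, included by session window");
    }
}
```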

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mjsax/kafka 
kafka-3452-session-window-follow-up

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2342.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2342


commit ac894ef83a3e937a06bfc3cc33febcf0dbdb7aa9
Author: Matthias J. Sax 
Date:   2017-01-11T01:06:53Z

KAFKA-3452: follow-up -- introduce SesssionWindows
 - TimeWindows represent half-open time intervals while SessionWindows 
represent closed time intervals




> Support session windows
> ---
>
> Key: KAFKA-3452
> URL: https://issues.apache.org/jira/browse/KAFKA-3452
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Damian Guy
>  Labels: api, kip
> Fix For: 0.10.2.0
>
>
> The Streams DSL currently does not provide session windows as in the Dataflow 
> model. We have seen some common use cases for this feature, and it's better 
> to add this support asap.
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-94+Session+Windows





[GitHub] kafka pull request #2342: KAFKA-3452: follow-up -- introduce SesssionWindows

2017-01-10 Thread mjsax
GitHub user mjsax opened a pull request:

https://github.com/apache/kafka/pull/2342

KAFKA-3452: follow-up -- introduce SesssionWindows

 - TimeWindows represent half-open time intervals while SessionWindows 
represent closed time intervals

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mjsax/kafka 
kafka-3452-session-window-follow-up

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2342.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2342


commit ac894ef83a3e937a06bfc3cc33febcf0dbdb7aa9
Author: Matthias J. Sax 
Date:   2017-01-11T01:06:53Z

KAFKA-3452: follow-up -- introduce SesssionWindows
 - TimeWindows represent half-open time intervals while SessionWindows 
represent closed time intervals




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-4064) Add support for infinite endpoints for range queries in Kafka Streams KV stores

2017-01-10 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-4064:
-
Assignee: Roger Hoover  (was: Xavier Léauté)

> Add support for infinite endpoints for range queries in Kafka Streams KV 
> stores
> ---
>
> Key: KAFKA-4064
> URL: https://issues.apache.org/jira/browse/KAFKA-4064
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Roger Hoover
>Assignee: Roger Hoover
>Priority: Minor
> Fix For: 0.10.2.0
>
>
> In some applications, it's useful to iterate over the key-value store either:
> 1. from the beginning up to a certain key
> 2. from a certain key to the end
> We can add two new methods, rangeUntil() and rangeFrom(), to easily support 
> this.
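The proposed semantics can be sketched with a plain sorted map (this is a hypothetical illustration, not the actual Streams KV-store API; the names rangeUntil/rangeFrom follow the description's intent, assuming "rangeUtil" is a typo):

```java
import java.util.NavigableMap;
import java.util.TreeMap;

// Hypothetical sketch of open-ended range queries (plain TreeMap, not the
// real Streams KV-store API): one endpoint of the range is left unbounded.
public class OpenEndedRangeSketch {
    private final TreeMap<String, Integer> store = new TreeMap<>();

    public void put(String k, Integer v) { store.put(k, v); }

    // From the beginning of the store up to (and including) toKey.
    public NavigableMap<String, Integer> rangeUntil(String toKey) {
        return store.headMap(toKey, true);
    }

    // From fromKey (inclusive) to the end of the store.
    public NavigableMap<String, Integer> rangeFrom(String fromKey) {
        return store.tailMap(fromKey, true);
    }

    public static void main(String[] args) {
        OpenEndedRangeSketch s = new OpenEndedRangeSketch();
        s.put("a", 1); s.put("b", 2); s.put("c", 3);
        if (!s.rangeUntil("b").containsKey("a")) throw new AssertionError();
        if (s.rangeUntil("b").containsKey("c")) throw new AssertionError();
        if (s.rangeFrom("b").containsKey("a")) throw new AssertionError();
        System.out.println("open-ended ranges behave as expected");
    }
}
```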





[jira] [Updated] (KAFKA-4064) Add support for infinite endpoints for range queries in Kafka Streams KV stores

2017-01-10 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-4064:
-
Status: Patch Available  (was: Open)

> Add support for infinite endpoints for range queries in Kafka Streams KV 
> stores
> ---
>
> Key: KAFKA-4064
> URL: https://issues.apache.org/jira/browse/KAFKA-4064
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Roger Hoover
>Assignee: Roger Hoover
>Priority: Minor
> Fix For: 0.10.2.0
>
>
> In some applications, it's useful to iterate over the key-value store either:
> 1. from the beginning up to a certain key
> 2. from a certain key to the end
> We can add two new methods, rangeUntil() and rangeFrom(), to easily support 
> this.





[jira] [Updated] (KAFKA-3835) Streams is creating two ProducerRecords for each send via RecordCollector

2017-01-10 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-3835:
-
Fix Version/s: (was: 0.10.2.0)
   0.10.3.0

> Streams is creating two ProducerRecords for each send via RecordCollector
> -
>
> Key: KAFKA-3835
> URL: https://issues.apache.org/jira/browse/KAFKA-3835
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.0.0
>Reporter: Damian Guy
>Priority: Minor
>  Labels: newbie
> Fix For: 0.10.3.0
>
>
> The RecordCollector.send(...) method below currently receives a 
> ProducerRecord from its caller and then creates another one to forward on to 
> its producer. The duplicate ProducerRecord creation should be removed.
> {code}
> public <K, V> void send(ProducerRecord<K, V> record, Serializer<K> 
> keySerializer, Serializer<V> valueSerializer,
> StreamPartitioner<K, V> partitioner)
> {code}
> We could replace the above method with
> {code}
> public <K, V> void send(String topic,
> K key,
> V value,
> Integer partition,
> Long timestamp,
> Serializer<K> keySerializer,
> Serializer<V> valueSerializer,
> StreamPartitioner<K, V> partitioner)
> {code}
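The point of the signature change can be sketched as follows (hypothetical stand-in types, not the real Streams classes): passing the record's fields through lets the collector allocate the output ProducerRecord once, instead of unwrapping one record only to allocate another.

```java
// Simplified sketch of the refactor described above (hypothetical stand-in
// types, not the real Streams classes): a field-based send allocates exactly
// one output record per call.
public class RecordCollectorSketch {
    static final class OutRecord {
        final String topic; final Integer partition; final Long timestamp;
        final Object key; final Object value;
        OutRecord(String topic, Integer partition, Long timestamp, Object key, Object value) {
            this.topic = topic; this.partition = partition;
            this.timestamp = timestamp; this.key = key; this.value = value;
        }
    }

    static int allocations = 0;

    // Field-based send: one output record is built directly from the fields.
    static <K, V> OutRecord send(String topic, K key, V value,
                                 Integer partition, Long timestamp) {
        allocations++;
        return new OutRecord(topic, partition, timestamp, key, value);
    }

    public static void main(String[] args) {
        OutRecord r = send("t3", "k", "v", 0, 1000L);
        if (allocations != 1 || !r.topic.equals("t3")) throw new AssertionError();
        System.out.println("single allocation per send");
    }
}
```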





[jira] [Updated] (KAFKA-3714) Allow users greater access to register custom streams metrics

2017-01-10 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-3714:
-
Fix Version/s: (was: 0.10.2.0)
   0.10.3.0

> Allow users greater access to register custom streams metrics
> -
>
> Key: KAFKA-3714
> URL: https://issues.apache.org/jira/browse/KAFKA-3714
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Jeff Klukas
>Assignee: Eno Thereska
>Priority: Minor
>  Labels: api
> Fix For: 0.10.3.0
>
>
> Copying in some discussion that originally appeared in 
> https://github.com/apache/kafka/pull/1362#issuecomment-219064302
> Kafka Streams is largely a higher-level abstraction on top of producers and 
> consumers, and it seems sensible to match the KafkaStreams interface to that 
> of KafkaProducer and KafkaConsumer where possible. For producers and 
> consumers, the metric registry is internal and metrics are only exposed as an 
> unmodifiable map. This allows users to access client metric values for use in 
> application health checks, etc., but doesn't allow them to register new 
> metrics.
> That approach seems reasonable if we assume that a user interested in 
> defining custom metrics is already going to be using a separate metrics 
> library. In such a case, users will likely find it easier to define metrics 
> using whatever library they're familiar with rather than learning the API for 
> Kafka's Metrics class. Is this a reasonable assumption?
> If we want to expose the Metrics instance so that users can define arbitrary 
> metrics, I'd argue that there's need for documentation updates. In 
> particular, I find the notion of metric tags confusing. Tags can be defined 
> in a MetricConfig when the Metrics instance is constructed, 
> StreamsMetricsImpl is maintaining its own set of tags, and users can set tag 
> overrides.
> If a user were to get access to the Metrics instance, they would be missing 
> the tags defined in StreamsMetricsImpl. I'm imagining that users would want 
> their custom metrics to sit alongside the predefined metrics with the same 
> tags, and users shouldn't be expected to manage those additional tags 
> themselves.
> So, why are we allowing users to define their own metrics via the 
> StreamsMetrics interface in the first place? Is it that we'd like to be able 
> to provide a built-in latency metric, but the definition depends on the 
> details of the use case so there's no generic solution? That would be 
> sufficient motivation for this special case of addLatencySensor. If we want 
> to continue down that path and give users access to define a wider range of 
> custom metrics, I'd prefer to extend the StreamsMetrics interface so that 
> users can call methods on that object, automatically getting the tags 
> appropriate for that instance rather than interacting with the raw Metrics 
> instance.
> ---
> Guozhang had the following comments:
> 1) For the producer/consumer cases, all internal metrics are provided and 
> abstracted from users, who just need to read the documentation to poll 
> whichever provided metrics they are interested in; if they want to define 
> more metrics, those are likely to live outside the clients themselves, and 
> users can use whatever methods they like, so the Metrics instance does not 
> need to be exposed to users.
> 2) For streams, things are a bit different: users define the computational 
> logic, which becomes part of the "Streams Client" processing and may be of 
> interest to be monitored by users themselves; think of a customized processor 
> that sends an email to some address based on a condition, where users want to 
> monitor the average rate of emails sent. Hence it is worth considering 
> whether or not they should be able to access the Metrics instance to define 
> their own metrics alongside the pre-defined metrics provided by the library.
> 3) Now, since the Metrics class was not previously designed for public usage, 
> it is not very user-friendly for defining sensors, especially regarding the 
> semantic differences between name / scope / tags. StreamsMetrics tries to 
> hide some of this semantic confusion from users, but it still exposes tags 
> and hence is not perfect in doing so. We need to think of a better approach 
> so that: 1) user-defined metrics will be "aligned" (i.e. with the same name 
> prefix within a single application, with similar scope hierarchy definition, 
> etc.) with library-provided metrics, and 2) there are natural APIs to do so.
> I do not have concrete ideas about 3) above off the top of my head; comments 
> are more than welcome.
> ---
> I'm not sure that I agree that 1) and 2) are truly different situations. A 
> user might choose to send email messages within a 

[jira] [Resolved] (KAFKA-1027) Add documentation for system tests

2017-01-10 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-1027.
--
   Resolution: Won't Fix
 Assignee: Ewen Cheslack-Postava
Fix Version/s: (was: 0.10.2.0)

This was for the old system test framework. There are some docs for the new 
setup in the README in the tests directory. If someone wants to follow up with 
more substantial docs or in the main, user-facing docs, we can file a separate 
JIRA.

> Add documentation for system tests
> --
>
> Key: KAFKA-1027
> URL: https://issues.apache.org/jira/browse/KAFKA-1027
> Project: Kafka
>  Issue Type: Sub-task
>  Components: website
>Reporter: Tejas Patil
>Assignee: Ewen Cheslack-Postava
>Priority: Minor
>  Labels: documentation
>
> Create a document describing following things: 
> - Overview of Kafka system test framework
> - how to run the entire system test suite 
> - how to run a specific system test
> - how to interpret the system test results
> - how to troubleshoot a failed test case
> - how to add new test module
> - how to add new test case





Build failed in Jenkins: kafka-trunk-jdk7 #1821

2017-01-10 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: Replacing for with foreach loop in stream test classes

--
[...truncated 18215 lines...]
org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
testSpecificPartition PASSED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
shouldThrowStreamsExceptionOnSubsequentCallIfASendFails STARTED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
shouldThrowStreamsExceptionOnSubsequentCallIfASendFails PASSED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
shouldThrowStreamsExceptionAfterMaxAttempts STARTED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
shouldThrowStreamsExceptionAfterMaxAttempts PASSED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
shouldRetryWhenTimeoutExceptionOccursOnSend STARTED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
shouldRetryWhenTimeoutExceptionOccursOnSend PASSED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
shouldThrowStreamsExceptionOnCloseIfASendFailed STARTED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
shouldThrowStreamsExceptionOnCloseIfASendFailed PASSED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
testStreamPartitioner STARTED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
testStreamPartitioner PASSED

org.apache.kafka.streams.processor.internals.PunctuationQueueTest > 
testPunctuationInterval STARTED

org.apache.kafka.streams.processor.internals.PunctuationQueueTest > 
testPunctuationInterval PASSED

org.apache.kafka.streams.processor.internals.AbstractTaskTest > 
shouldThrowProcessorStateExceptionOnInitializeOffsetsWhenAuthorizationException 
STARTED

org.apache.kafka.streams.processor.internals.AbstractTaskTest > 
shouldThrowProcessorStateExceptionOnInitializeOffsetsWhenAuthorizationException 
PASSED

org.apache.kafka.streams.processor.internals.AbstractTaskTest > 
shouldThrowProcessorStateExceptionOnInitializeOffsetsWhenKafkaException STARTED

org.apache.kafka.streams.processor.internals.AbstractTaskTest > 
shouldThrowProcessorStateExceptionOnInitializeOffsetsWhenKafkaException PASSED

org.apache.kafka.streams.processor.internals.AbstractTaskTest > 
shouldThrowWakeupExceptionOnInitializeOffsetsWhenWakeupException STARTED

org.apache.kafka.streams.processor.internals.AbstractTaskTest > 
shouldThrowWakeupExceptionOnInitializeOffsetsWhenWakeupException PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldUpdatePartitionHostInfoMapOnAssignment STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldUpdatePartitionHostInfoMapOnAssignment PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldThrowExceptionIfApplicationServerConfigIsNotHostPortPair STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldThrowExceptionIfApplicationServerConfigIsNotHostPortPair PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldMapUserEndPointToTopicPartitions STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldMapUserEndPointToTopicPartitions PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldAddUserDefinedEndPointToSubscription STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldAddUserDefinedEndPointToSubscription PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithStandbyReplicas STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithStandbyReplicas PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldNotLoopInfinitelyOnMissingMetadataAndShouldNotCreateRelatedTasks STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldNotLoopInfinitelyOnMissingMetadataAndShouldNotCreateRelatedTasks PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithNewTasks STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithNewTasks PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldExposeHostStateToTopicPartitionsOnAssignment STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldExposeHostStateToTopicPartitionsOnAssignment PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithStates STARTED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
testAssignWithStates PASSED

org.apache.kafka.streams.processor.internals.StreamPartitionAssignorTest > 
shouldThrowExceptionIfApplicationSe

Re: [DISCUSS] KIP-107: Add purgeDataBefore() API in AdminClient

2017-01-10 Thread Dong Lin
Bump up. I am going to initiate the vote if there is no further concern
with the KIP.

On Fri, Jan 6, 2017 at 11:23 PM, Dong Lin  wrote:

> Hey Mayuresh,
>
> Thanks for the comment. If the message's offset is below low_watermark,
> then it should have been deleted by log retention policy. Thus it is OK not
> to expose this message to consumer. Does this answer your question?
>
> Thanks,
> Dong
>
> On Fri, Jan 6, 2017 at 4:21 PM, Mayuresh Gharat <
> gharatmayures...@gmail.com> wrote:
>
>> Hi Dong,
>>
>> Thanks for the KIP.
>>
>> I had a question (which might have been answered before).
>>
>> 1) The KIP says that the low_water_mark will be updated periodically by
>> the
>> broker like high_water_mark.
>> Essentially we want to use low_water_mark for cases where an entire
>> segment
>> cannot be deleted because may be the segment_start_offset < PurgeOffset <
>> segment_end_offset, in which case we will set the low_water_mark to
>> PurgeOffset+1.
>>
>> 2) The KIP also says that messages below low_water_mark will not be
>> exposed
>> for consumers, which does make sense since we want say that data below
>> low_water_mark is purged.
>>
>> Looking at above conditions, does it make sense not to update the
>> low_water_mark periodically but only on PurgeRequest?
>> The reason being, if we update it periodically then as per 2) we will not
>> be allowing consumers to re-consume data that is not purged but is below
>> low_water_mark.
>>
>> Thanks,
>>
>> Mayuresh
>>
>>
>> On Fri, Jan 6, 2017 at 11:18 AM, Dong Lin  wrote:
>>
>> > Hey Jun,
>> >
>> > Thanks for reviewing the KIP!
>> >
>> > 1. The low_watermark will be checkpointed in a new file named
>> >  "replication-low-watermark-checkpoint". It will have the same format
>> as
>> > the existing replication-offset-checkpoint file. This allows us to keep
>> > the existing format of checkpoint files which maps TopicPartition to
>> Long.
>> > I just updated the "Public Interface" section in the KIP wiki to explain
>> > this file.
>> >
>> > 2. I think using low_watermark from leader to trigger log retention in
>> the
>> > follower will work correctly in the sense that all messages with offset
>> <
>> > low_watermark can be deleted. But I am not sure that the efficiency is
>> the
>> > same, i.e. offset of messages which should be deleted (i.e. due to time
>> or
>> > size-based log retention policy) will be smaller than low_watermark from
>> > the leader.
>> >
>> > For example, say both the follower and the leader have messages with
>> > offsets in range [0, 2000]. If the follower does log rolling slightly
>> later
>> > than leader, the segments on follower would be [0, 1001], [1002, 2000]
>> and
>> > segments on leader would be [0, 1000], [1001, 2000]. After leader
>> deletes
>> > the first segment, the low_watermark would be 1001. Thus the first
>> segment
>> > would stay on follower's disk unnecessarily which may double disk usage
>> at
>> > worst.
>> >
>> > Since this approach doesn't save us much, I am inclined to not include
>> this
>> > change to keep the KIP simple.
>> >
>> > Dong
>> >
>> >
>> >
>> > On Fri, Jan 6, 2017 at 10:05 AM, Jun Rao  wrote:
>> >
>> > > Hi, Dong,
>> > >
>> > > Thanks for the proposal. Looks good overall. A couple of comments.
>> > >
>> > > 1. Where is the low_watermark checkpointed? Is that
>> > > in replication-offset-checkpoint? If so, do we need to bump up the
>> > version?
>> > > Could you also describe the format change?
>> > >
>> > > 2. For topics with "delete" retention, currently we let each replica
>> > delete
>> > > old segments independently. With low_watermark, we could just let
>> leaders
>> > > delete old segments through the deletion policy and the followers will
>> > > simply delete old segments based on low_watermark. Not sure if this
>> saves
>> > > much, but is a potential option that may be worth thinking about.
>> > >
>> > > Jun
>> > >
>> > >
>> > >
>> > > On Wed, Jan 4, 2017 at 8:13 AM, radai 
>> > wrote:
>> > >
>> > > > one more example of complicated config - mirror maker.
>> > > >
>> > > > we definitely cant trust each and every topic owner to configure
>> their
>> > > > topics not to purge before they've been mirrored.
>> > > > which would mean there's a per-topic config (set by the owner) and a
>> > > > "global" config (where mirror makers are specified) and they need
>> to be
>> > > > "merged".
>> > > > for those topics that _are_ mirrored.
>> > > > which is a changing set of topics thats stored in an external system
>> > > > outside of kafka.
>> > > > if a topic is taken out of the mirror set the MM offset would be
>> > "frozen"
>> > > > at that point and prevent clean-up for all eternity, unless its
>> > > cleaned-up
>> > > > itself.
>> > > >
>> > > > ...
>> > > >
>> > > > complexity :-)
>> > > >
>> > > > On Wed, Jan 4, 2017 at 8:04 AM, radai 
>> > > wrote:
>> > > >
>> > > > > in summary - i'm not opposed to the idea of a per-topic clean up
>> > config
>> > > > > that tracks some set of consumer groups' offsets (w
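
The thread above describes the proposed `replication-low-watermark-checkpoint` file only in prose: it reuses the plain-text layout of the existing `replication-offset-checkpoint` file (a version line, an entry-count line, then one `topic partition offset` entry per line), mapping TopicPartition to Long. As a rough illustration only — the class and method names below are invented, not Kafka's actual checkpoint code — reading and writing that layout looks roughly like this:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of the checkpoint layout described in the thread:
// line 1 = format version, line 2 = entry count, then one
// "topic partition offset" entry per line. All names are hypothetical.
public class LowWatermarkCheckpoint {
    static final int VERSION = 0;

    // Keys are "topic partition" strings, e.g. "t3 0"; values are offsets.
    static void write(Path file, Map<String, Long> lowWatermarks) throws IOException {
        List<String> lines = new ArrayList<>();
        lines.add(Integer.toString(VERSION));
        lines.add(Integer.toString(lowWatermarks.size()));
        for (Map.Entry<String, Long> e : lowWatermarks.entrySet())
            lines.add(e.getKey() + " " + e.getValue());
        Files.write(file, lines, StandardCharsets.UTF_8);
    }

    static Map<String, Long> read(Path file) throws IOException {
        List<String> lines = Files.readAllLines(file, StandardCharsets.UTF_8);
        int count = Integer.parseInt(lines.get(1).trim());
        Map<String, Long> result = new LinkedHashMap<>();
        for (int i = 2; i < 2 + count; i++) {
            String line = lines.get(i);
            int sep = line.lastIndexOf(' '); // offset is the last space-separated field
            result.put(line.substring(0, sep), Long.parseLong(line.substring(sep + 1)));
        }
        return result;
    }
}
```

A production implementation would additionally validate the version line and write atomically (temp file plus rename) so a crash mid-write cannot corrupt the checkpoint.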

[jira] [Created] (KAFKA-4613) Treat null-key records the same way for joins and aggregations

2017-01-10 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-4613:
--

 Summary: Treat null-key records the same way for joins and 
aggregations
 Key: KAFKA-4613
 URL: https://issues.apache.org/jira/browse/KAFKA-4613
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Matthias J. Sax


Currently, records with a null key get dropped during aggregations, while for joins we 
raise an exception.

We might want to either drop in both cases or raise an exception in both cases to be 
consistent.
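
The inconsistency is easy to state as code. The sketch below is purely hypothetical — `NullKeyPolicy` and `accept` are invented names, not the Streams API — but it shows the two current behaviors side by side and how routing both paths through one shared policy check would make joins and aggregations consistent:

```java
// Hypothetical helper unifying the two current behaviors: aggregations
// silently drop null-key records, while joins throw an exception.
public class NullKeyHandling {
    enum NullKeyPolicy { DROP, FAIL }

    // Returns true if the record should be processed; false means "drop".
    static <K, V> boolean accept(K key, V value, NullKeyPolicy policy) {
        if (key != null)
            return true;
        if (policy == NullKeyPolicy.FAIL)   // today's join behavior
            throw new IllegalStateException("Record key must not be null");
        return false;                       // today's aggregation behavior
    }
}
```

Whichever policy is chosen, both operators would call the same check, so the decision to drop or fail is made once rather than per operator.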





Re: [DISCUSS] KIP-109: Old Consumer Deprecation

2017-01-10 Thread Ismael Juma
Hi Renu,

0.10 was released in May 2016. 0.11 will not be before May 2017 (it
could be later if the next release turns out to be 0.10.3). So the most
recent data indicates a minimum of 1 year between major releases, but no
decision has been made on future major releases yet.

The impact would indeed be build warnings, but those can be suppressed
easily in the build config or via @SuppressWarnings annotations. Do you use
the consumer API directly or do you have a company wrapper?

Ismael

On Tue, Jan 10, 2017 at 8:53 PM, Renu Tewari  wrote:

> Hi Ismael,
> What are the expected timelines we are talking about between the major
> releases? At LI we are expecting to have at least 1 year between the old
> consumer deprecation and removal so we have enough time to upgrade all
> applications. The rollout to new consumer has hit many hurdles so hasn't
> proceeded at the expected pace. What impact would an official deprecation
> have on applications?  Any warnings would be disruptive and would prefer
> that happens when there is a migration plan in place so we have a bound on
> how long it will take. There are too many unknowns at this time.
>
> Thoughts on timelines?
>
> regards
> Renu
>
> On Mon, Jan 9, 2017 at 6:34 PM, Ismael Juma  wrote:
>
> > Hi Joel,
> >
> > Great to hear that LinkedIn is likely to implement KAFKA-4513. :)
> >
> > Yes, the KIP as it stands is a compromise given the situation. We could
> > push the deprecation to the subsequent release: likely to be 0.11.0.0
> since
> > there are a number of KIPs that require message format changes. This
> would
> > mean that the old consumers would not be removed before 0.12.0.0 (the
> major
> > release after 0.11.0.0). Would that work better for you all?
> >
> > Ismael
> >
> > On Tue, Jan 10, 2017 at 12:52 AM, Joel Koshy 
> wrote:
> >
> > > >
> > > >
> > > > The ideal scenario would be for us to provide a tool for no downtime
> > > > migration as discussed in the original thread (I filed
> > > > https://issues.apache.org/jira/browse/KAFKA-4513 in response to that
> > > > discussion). There are a few issues, however:
> > > >
> > > >- There doesn't seem to be much demand for it (outside of
> LinkedIn,
> > at
> > > >least)
> > > >- No-one is working on it or has indicated that they are planning
> to
> > > >work on it
> > > >- It's a non-trivial change and it requires a good amount of
> testing
> > > to
> > > >ensure it works as expected
> > > >
> > >
> > > For LinkedIn: while there are a few consuming applications for which
> the
> > > current shut-down and restart approach to migration will suffice, I
> doubt
> > > we will be able to do this for majority of services that are outside
> our
> > > direct control. Given that a seamless migration is a pre-req for us to
> > > switch to the new consumer widely (there are a few use-cases already on
> > it)
> > > it is something that we (LinkedIn) will likely implement although we
> > > haven't done further brainstorming/improvements beyond what was
> proposed
> > in
> > > the other deprecation thread.
> > >
> > >
> > > > In the meantime, we have this suboptimal situation where the old
> > > consumers
> > > > are close to unmaintained even though we don't say it outright: they
> > > don't
> > >
> > > get new features (basic things like security are missing) and bug fixes
> > are
> > > > rare. In practice, the old clients have been deprecated a while back,
> > we
> > > >
> > >
> > > Agreed that it is suboptimal, but the reality is that LI and I think a
> > few
> > > other companies are still to a large extent on the old consumer and
> will
> > be
> > > for at least a good part of this year. This does mean that we have the
> > > overhead of maintaining some internal workarounds for the old consumer.
> > >
> > >
> > > > just haven't made it official. This proposal is about rectifying that
> > so
> > > > that we communicate our intentions to our users more clearly. As
> Vahid
> > > > said, this KIP is not about changing how we maintain the existing
> code.
> > > >
> > > > The KIP that proposes the removal of all the old clients will be more
> > > > interesting, but it doesn't exist yet. :)
> > > >
> > >
> > > Deprecating *after* providing a sound migration path still seems to be
> > the
> > > right thing to do but if there isn't any demand for it then maybe
> that's
> > a
> > > reasonable compromise. I'm still surprised that more users aren't as
> > > impacted by this and as mentioned earlier, it could be an issue of
> > > awareness but I'm not sure that deprecating before a migration path is
> in
> > > place would be considered best-practice in raising awareness.
> > >
> > > Thanks,
> > >
> > > Joel
> > >
> > >
> > >
> > > >
> > > > Ismael
> > > >
> > > > On Fri, Jan 6, 2017 at 3:27 AM, Vahid S Hashemian <
> > > > vahidhashem...@us.ibm.com
> > > > > wrote:
> > > >
> > > > > One thing that probably needs some clarification is what is implied
> > by
> > > > > "deprecated" in the Kafka project.
> > >

Re: [DISCUSS] KIP-109: Old Consumer Deprecation

2017-01-10 Thread Joel Koshy
On Tue, Jan 10, 2017 at 12:53 PM, Renu Tewari  wrote:

> Hi Ismael,
> What are the expected timelines we are talking about between the major
> releases? At LI we are expecting to have at least 1 year between the old
> consumer deprecation and removal so we have enough time to upgrade all
> applications. The rollout to new consumer has hit many hurdles so hasn't
> proceeded at the expected pace. What impact would an official deprecation
> have on applications?  Any warnings would be disruptive and would prefer
> that happens when there is a migration plan in place so we have a bound on
> how long it will take. There are too many unknowns at this time.
>
> Thoughts on timelines?
>
> regards
> Renu
>
> On Mon, Jan 9, 2017 at 6:34 PM, Ismael Juma  wrote:
>
> > Hi Joel,
> >
> > Great to hear that LinkedIn is likely to implement KAFKA-4513. :)
> >
> > Yes, the KIP as it stands is a compromise given the situation. We could
> > push the deprecation to the subsequent release: likely to be 0.11.0.0
> since
> > there are a number of KIPs that require message format changes. This
> would
> > mean that the old consumers would not be removed before 0.12.0.0 (the
> major
> > release after 0.11.0.0). Would that work better for you all?
>

It helps, but the main concern is deprecating before implementing the
migration path. So this means merging in the deprecation PR right after
cutting 0.10.2 is also going to be problematic since we release off trunk.
So we can prioritize working on KAFKA-4513.

@Ewen: good question on message format changes - I agree with Ismael that
for features such as a new compression scheme we can do without a format
change. I don't think we have any formal guidance on the scenarios that you
highlighted at this point so it may help to have a discussion on a separate
thread and codify that in our docs under a new "Kafka message and protocol
versioning" section.

Thanks,

Joel


> >
> > On Tue, Jan 10, 2017 at 12:52 AM, Joel Koshy 
> wrote:
> >
> > > >
> > > >
> > > > The ideal scenario would be for us to provide a tool for no downtime
> > > > migration as discussed in the original thread (I filed
> > > > https://issues.apache.org/jira/browse/KAFKA-4513 in response to that
> > > > discussion). There are a few issues, however:
> > > >
> > > >- There doesn't seem to be much demand for it (outside of
> LinkedIn,
> > at
> > > >least)
> > > >- No-one is working on it or has indicated that they are planning
> to
> > > >work on it
> > > >- It's a non-trivial change and it requires a good amount of
> testing
> > > to
> > > >ensure it works as expected
> > > >
> > >
> > > For LinkedIn: while there are a few consuming applications for which
> the
> > > current shut-down and restart approach to migration will suffice, I
> doubt
> > > we will be able to do this for majority of services that are outside
> our
> > > direct control. Given that a seamless migration is a pre-req for us to
> > > switch to the new consumer widely (there are a few use-cases already on
> > it)
> > > it is something that we (LinkedIn) will likely implement although we
> > > haven't done further brainstorming/improvements beyond what was
> proposed
> > in
> > > the other deprecation thread.
> > >
> > >
> > > > In the meantime, we have this suboptimal situation where the old
> > > consumers
> > > > are close to unmaintained even though we don't say it outright: they
> > > don't
> > >
> > > get new features (basic things like security are missing) and bug fixes
> > are
> > > > rare. In practice, the old clients have been deprecated a while back,
> > we
> > > >
> > >
> > > Agreed that it is suboptimal, but the reality is that LI and I think a
> > few
> > > other companies are still to a large extent on the old consumer and
> will
> > be
> > > for at least a good part of this year. This does mean that we have the
> > > overhead of maintaining some internal workarounds for the old consumer.
> > >
> > >
> > > > just haven't made it official. This proposal is about rectifying that
> > so
> > > > that we communicate our intentions to our users more clearly. As
> Vahid
> > > > said, this KIP is not about changing how we maintain the existing
> code.
> > > >
> > > > The KIP that proposes the removal of all the old clients will be more
> > > > interesting, but it doesn't exist yet. :)
> > > >
> > >
> > > Deprecating *after* providing a sound migration path still seems to be
> > the
> > > right thing to do but if there isn't any demand for it then maybe
> that's
> > a
> > > reasonable compromise. I'm still surprised that more users aren't as
> > > impacted by this and as mentioned earlier, it could be an issue of
> > > awareness but I'm not sure that deprecating before a migration path is
> in
> > > place would be considered best-practice in raising awareness.
> > >
> > > Thanks,
> > >
> > > Joel
> > >
> > >
> > >
> > > >
> > > > Ismael
> > > >
> > > > On Fri, Jan 6, 2017 at 3:27 AM, Vahid S Hashemian <
> > > > vahidhashem...@us.

Build failed in Jenkins: kafka-trunk-jdk8 #1166

2017-01-10 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: Replacing for with foreach loop in stream test classes

--
[...truncated 18234 lines...]

org.apache.kafka.streams.KafkaStreamsTest > testCleanup STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCleanup PASSED

org.apache.kafka.streams.KafkaStreamsTest > testStartAndClose STARTED

org.apache.kafka.streams.KafkaStreamsTest > testStartAndClose PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCloseIsIdempotent STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCloseIsIdempotent PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCannotCleanupWhileRunning 
STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCannotCleanupWhileRunning PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCannotStartTwice STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCannotStartTwice PASSED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegion[0] STARTED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegion[0] PASSED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegion[1] STARTED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegion[1] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[0] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[0] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[0] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses[0] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[1] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[1] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[1] FAILED
java.lang.AssertionError: Condition not met within timeout 6. Did not 
receive 1 number of records
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:259)
at 
org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinValuesRecordsReceived(IntegrationTestUtils.java:253)
at 
org.apache.kafka.streams.integration.QueryableStateIntegrationTest.waitUntilAtLeastNumRecordProcessed(QueryableStateIntegrationTest.java:669)
at 
org.apache.kafka.streams.integration.QueryableStateIntegrationTest.queryOnRebalance(QueryableStateIntegrationTest.java:350)

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses[1] PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testShouldReadFromRegexAndNamedTopics STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testShouldReadFromRegexAndNamedTopics PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenCreated STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenCreated PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenDeleted STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenDeleted PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testNoMessagesSentExceptionFromOverlappingPatterns STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testNoMessagesSentExceptionFromOverlappingPatterns PASSED

org.apache.kafka.streams.integration.KStreamRepartition

[jira] [Commented] (KAFKA-4547) Consumer.position returns incorrect results for Kafka 0.10.1.0 client

2017-01-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15816287#comment-15816287
 ] 

ASF GitHub Bot commented on KAFKA-4547:
---

GitHub user vahidhashemian opened a pull request:

https://github.com/apache/kafka/pull/2341

KAFKA-4547: Avoid unnecessary offset commit that could lead to an invalid 
offset position if partition is paused



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka KAFKA-4547

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2341.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2341


commit e88617d0f2d3163d2733792f158483e41a2b40f5
Author: Vahid Hashemian 
Date:   2017-01-04T22:11:53Z

KAFKA-4547: Avoid unnecessary offset commit that could lead to an invalid 
offset position if partition is paused




> Consumer.position returns incorrect results for Kafka 0.10.1.0 client
> -
>
> Key: KAFKA-4547
> URL: https://issues.apache.org/jira/browse/KAFKA-4547
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.10.1.0
> Environment: Windows Kafka 0.10.1.0
>Reporter: Pranav Nakhe
>Assignee: Vahid Hashemian
>  Labels: clients
> Fix For: 0.10.2.0
>
> Attachments: issuerep.zip
>
>
> Consider the following code -
>   KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
>   List<TopicPartition> listOfPartitions = new ArrayList<>();
>   for (int i = 0; i < consumer.partitionsFor("IssueTopic").size(); i++) {
>   listOfPartitions.add(new TopicPartition("IssueTopic", i));
>   }
>   consumer.assign(listOfPartitions);
>   consumer.pause(listOfPartitions);
>   consumer.seekToEnd(listOfPartitions);
>   //consumer.resume(listOfPartitions); -- commented out
>   for (int i = 0; i < listOfPartitions.size(); i++) {
>   System.out.println(consumer.position(listOfPartitions.get(i)));
>   }
> I have created a topic IssueTopic with 3 partitions with a single replica on 
> my single node kafka installation (0.10.1.0)
> The behavior noticed for Kafka client 0.10.1.0 as against Kafka client 
> 0.10.0.1 is as follows:
> A) Initially when there are no messages on IssueTopic running the above 
> program returns
> 0.10.1.0   
> 0  
> 0  
> 0   
> 0.10.0.1
> 0
> 0
> 0
> B) Next I send 6 messages and see that the messages have been evenly 
> distributed across the three partitions. Running the above program now 
> returns 
> 0.10.1.0   
> 0  
> 0  
> 2  
> 0.10.0.1
> 2
> 2
> 2
> Clearly there is a difference in behavior for the 2 clients.
> Now after seekToEnd call if I make a call to resume (uncomment the resume 
> call in code above) then the behavior is
> 0.10.1.0   
> 2  
> 2  
> 2  
> 0.10.0.1
> 2
> 2
> 2
> This is an issue I came across when using the spark kafka integration for 
> 0.10. When I use kafka 0.10.1.0 I started seeing this issue. I had raised a 
> pull request to resolve that issue [SPARK-18779] but when looking at the 
> kafka client implementation/documentation now it seems the issue is with 
> kafka and not with spark. There does not seem to be any documentation which 
> specifies/implies that we need to call resume after seekToEnd for position to 
> return the correct value. Also there is a clear difference in the behavior in 
> the two kafka client implementations. 





[GitHub] kafka pull request #2341: KAFKA-4547: Avoid unnecessary offset commit that c...

2017-01-10 Thread vahidhashemian
GitHub user vahidhashemian opened a pull request:

https://github.com/apache/kafka/pull/2341

KAFKA-4547: Avoid unnecessary offset commit that could lead to an invalid 
offset position if partition is paused



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka KAFKA-4547

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2341.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2341


commit e88617d0f2d3163d2733792f158483e41a2b40f5
Author: Vahid Hashemian 
Date:   2017-01-04T22:11:53Z

KAFKA-4547: Avoid unnecessary offset commit that could lead to an invalid 
offset position if partition is paused




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2963) Replace server internal usage of TopicAndPartition with TopicPartition

2017-01-10 Thread Mickael Maison (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15816186#comment-15816186
 ] 

Mickael Maison commented on KAFKA-2963:
---

[~sinus] Are you still working on this ?
If not, do you mind if I grab it ?

> Replace server internal usage of TopicAndPartition with TopicPartition
> --
>
> Key: KAFKA-2963
> URL: https://issues.apache.org/jira/browse/KAFKA-2963
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Grant Henke
>Assignee: Jakub Nowak
>






Re: [DISCUSS] KIP-109: Old Consumer Deprecation

2017-01-10 Thread Renu Tewari
Hi Ismael,
What are the expected timelines we are talking about between the major
releases? At LI we are expecting to have at least 1 year between the old
consumer deprecation and removal so we have enough time to upgrade all
applications. The rollout to new consumer has hit many hurdles so hasn't
proceeded at the expected pace. What impact would an official deprecation
have on applications?  Any warnings would be disruptive and would prefer
that happens when there is a migration plan in place so we have a bound on
how long it will take. There are too many unknowns at this time.

Thoughts on timelines?

regards
Renu

On Mon, Jan 9, 2017 at 6:34 PM, Ismael Juma  wrote:

> Hi Joel,
>
> Great to hear that LinkedIn is likely to implement KAFKA-4513. :)
>
> Yes, the KIP as it stands is a compromise given the situation. We could
> push the deprecation to the subsequent release: likely to be 0.11.0.0 since
> there are a number of KIPs that require message format changes. This would
> mean that the old consumers would not be removed before 0.12.0.0 (the major
> release after 0.11.0.0). Would that work better for you all?
>
> Ismael
>
> On Tue, Jan 10, 2017 at 12:52 AM, Joel Koshy  wrote:
>
> > >
> > >
> > > The ideal scenario would be for us to provide a tool for no downtime
> > > migration as discussed in the original thread (I filed
> > > https://issues.apache.org/jira/browse/KAFKA-4513 in response to that
> > > discussion). There are a few issues, however:
> > >
> > >- There doesn't seem to be much demand for it (outside of LinkedIn,
> at
> > >least)
> > >- No-one is working on it or has indicated that they are planning to
> > >work on it
> > >- It's a non-trivial change and it requires a good amount of testing
> > to
> > >ensure it works as expected
> > >
> >
> > For LinkedIn: while there are a few consuming applications for which the
> > current shut-down and restart approach to migration will suffice, I doubt
> > we will be able to do this for majority of services that are outside our
> > direct control. Given that a seamless migration is a pre-req for us to
> > switch to the new consumer widely (there are a few use-cases already on
> it)
> > it is something that we (LinkedIn) will likely implement although we
> > haven't done further brainstorming/improvements beyond what was proposed
> in
> > the other deprecation thread.
> >
> >
> > > In the meantime, we have this suboptimal situation where the old
> > consumers
> > > are close to unmaintained even though we don't say it outright: they
> > don't
> >
> > get new features (basic things like security are missing) and bug fixes
> are
> > > rare. In practice, the old clients have been deprecated a while back,
> we
> > >
> >
> > Agreed that it is suboptimal, but the reality is that LI and I think a
> few
> > other companies are still to a large extent on the old consumer and will
> be
> > for at least a good part of this year. This does mean that we have the
> > overhead of maintaining some internal workarounds for the old consumer.
> >
> >
> > > just haven't made it official. This proposal is about rectifying that
> so
> > > that we communicate our intentions to our users more clearly. As Vahid
> > > said, this KIP is not about changing how we maintain the existing code.
> > >
> > > The KIP that proposes the removal of all the old clients will be more
> > > interesting, but it doesn't exist yet. :)
> > >
> >
> > Deprecating *after* providing a sound migration path still seems to be
> the
> > right thing to do but if there isn't any demand for it then maybe that's
> a
> > reasonable compromise. I'm still surprised that more users aren't as
> > impacted by this and as mentioned earlier, it could be an issue of
> > awareness but I'm not sure that deprecating before a migration path is in
> > place would be considered best-practice in raising awareness.
> >
> > Thanks,
> >
> > Joel
> >
> >
> >
> > >
> > > Ismael
> > >
> > > On Fri, Jan 6, 2017 at 3:27 AM, Vahid S Hashemian <
> > > vahidhashem...@us.ibm.com
> > > > wrote:
> > >
> > > > One thing that probably needs some clarification is what is implied
> by
> > > > "deprecated" in the Kafka project.
> > > > I googled it a bit and it doesn't seem that deprecation
> conventionally
> > > > implies termination of support (or anything that could negatively
> > impact
> > > > existing users). That's my interpretation too.
> > > > It would be good to know if Kafka follows a different interpretation
> of
> > > > the term.
> > > >
> > > > If my understanding of the term is correct, since we are not yet
> > > targeting
> > > > a certain major release in which the old consumer will be removed, I
> > > don't
> > > > see any harm in marking it as deprecated.
> > > > There will be enough time to plan and implement the migration, if the
> > > > community decides that's the way to go, before phasing it out.
> > > >
> > > > At the minimum new Kafka users will pick the Java consumer without
> any
> > > > confusion.

[jira] [Updated] (KAFKA-3853) Report offsets for empty groups in ConsumerGroupCommand

2017-01-10 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian updated KAFKA-3853:
---
Status: Patch Available  (was: In Progress)

> Report offsets for empty groups in ConsumerGroupCommand
> ---
>
> Key: KAFKA-3853
> URL: https://issues.apache.org/jira/browse/KAFKA-3853
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, tools
>Reporter: Jason Gustafson
>Assignee: Vahid Hashemian
>  Labels: kip
> Fix For: 0.10.2.0
>
>
> We ought to be able to display offsets for groups which either have no active 
> members or which are not using group management. The owner column can be left 
> empty or set to "N/A". If a group is active, I'm not sure it would make sense 
> to report all offsets, in particular when partitions are unassigned, but if 
> it seems problematic to do so, we could enable the behavior with a flag (e.g. 
> --include-unassigned).
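The reporting behavior described above can be sketched with a small, hypothetical model (the function name and data shapes are illustrative, not the actual ConsumerGroupCommand code): committed offsets are listed even when the group has no active members, with the owner column falling back to "N/A" for unassigned partitions.

```python
# Hypothetical sketch of the proposed reporting logic; describe_group,
# committed, and assignments are invented names for illustration only.

def describe_group(committed, assignments):
    """committed: {(topic, partition): offset}
    assignments: {(topic, partition): consumer_id} (empty for an
    empty group or one not using group management)."""
    rows = []
    for tp in sorted(committed):
        # Owner is only known when a member currently holds the partition.
        owner = assignments.get(tp, "N/A")
        rows.append((tp[0], tp[1], committed[tp], owner))
    return rows

committed = {("orders", 0): 42, ("orders", 1): 17}
# Empty group: no members, but the stored offsets are still reported.
for topic, part, offset, owner in describe_group(committed, {}):
    print(topic, part, offset, owner)
```

With an active group, passing the real member assignments would populate the owner column instead of "N/A".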





Re: [VOTE] KIP-109: Old Consumer Deprecation

2017-01-10 Thread Gwen Shapira
+1

On Mon, Jan 9, 2017 at 8:58 AM, Vahid S Hashemian
 wrote:
> Happy Monday,
>
> I'd like to thank everyone who participated in the discussion around this
> KIP and shared their opinion.
>
> The only concern that was raised was not having a defined migration plan
> yet for existing users of the old consumer.
> I hope that responses to this concern (on the discussion thread) have been
> satisfactory.
>
> Given the short time we have until the 0.10.2.0 cut-off date I'd like to
> start voting on this KIP.
>
> KIP:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-109%3A+Old+Consumer+Deprecation
> Discussion thread:
> https://www.mail-archive.com/dev@kafka.apache.org/msg63427.html
>
> Thanks.
> --Vahid
>
>



-- 
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog


[jira] [Updated] (KAFKA-4604) Gradle Eclipse plugin creates projects for non project subfolders

2017-01-10 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4604:
-
Assignee: Dhwani Katagade

> Gradle Eclipse plugin creates projects for non project subfolders
> -
>
> Key: KAFKA-4604
> URL: https://issues.apache.org/jira/browse/KAFKA-4604
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.11.0.0
> Environment: Tried with Gradle 3.2.1 and Eclipse Neon
>Reporter: Dhwani Katagade
>Assignee: Dhwani Katagade
>Priority: Minor
>  Labels: build, easyfix, patch
>
> Running the command *./gradlew eclipse* generates .project and .classpath 
> files for all projects. But it also generates these files for the root 
> project folder and the connect subfolder that holds the 4 connector projects 
> even though these folders are not actual project folders.
> The unnecessary connect project is benign, but the unnecessary kafka project 
> created for the root folder has a side effect. The root folder has a bin 
> directory that holds some scripts. When a _Clean all projects_ is done in 
> Eclipse, it cleans up the scripts in the bin directory. These have to be 
> restored by running *git checkout \-\- *. This same could become a 
> problem for the connect project as well if tomorrow we place some files under 
> connect/bin.





[jira] [Commented] (KAFKA-4604) Gradle Eclipse plugin creates projects for non project subfolders

2017-01-10 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15816000#comment-15816000
 ] 

Guozhang Wang commented on KAFKA-4604:
--

BTW I have just assigned it to you.

> Gradle Eclipse plugin creates projects for non project subfolders
> -
>
> Key: KAFKA-4604
> URL: https://issues.apache.org/jira/browse/KAFKA-4604
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.11.0.0
> Environment: Tried with Gradle 3.2.1 and Eclipse Neon
>Reporter: Dhwani Katagade
>Priority: Minor
>  Labels: build, easyfix, patch
>
> Running the command *./gradlew eclipse* generates .project and .classpath 
> files for all projects. But it also generates these files for the root 
> project folder and the connect subfolder that holds the 4 connector projects 
> even though these folders are not actual project folders.
> The unnecessary connect project is benign, but the unnecessary kafka project 
> created for the root folder has a side effect. The root folder has a bin 
> directory that holds some scripts. When a _Clean all projects_ is done in 
> Eclipse, it cleans up the scripts in the bin directory. These have to be 
> restored by running *git checkout \-\- *. This same could become a 
> problem for the connect project as well if tomorrow we place some files under 
> connect/bin.





[GitHub] kafka pull request #2305: MINOR: Replacing for with foreach loop in stream t...

2017-01-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2305




Build failed in Jenkins: kafka-trunk-jdk8 #1165

2017-01-10 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: rename SessionStore.findSessionsToMerge to findSessions

--
[...truncated 5317 lines...]

kafka.api.SaslScramSslEndToEndAuthorizationTest > testNoGroupAcl PASSED

kafka.api.SaslScramSslEndToEndAuthorizationTest > testNoProduceWithDescribeAcl 
STARTED

kafka.api.SaslScramSslEndToEndAuthorizationTest > testNoProduceWithDescribeAcl 
PASSED

kafka.api.SaslScramSslEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe STARTED

kafka.api.SaslScramSslEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe PASSED

kafka.api.SaslScramSslEndToEndAuthorizationTest > 
testNoProduceWithoutDescribeAcl STARTED

kafka.api.SaslScramSslEndToEndAuthorizationTest > 
testNoProduceWithoutDescribeAcl PASSED

kafka.api.UserClientIdQuotaTest > testProducerConsumerOverrideUnthrottled 
STARTED

kafka.api.UserClientIdQuotaTest > testProducerConsumerOverrideUnthrottled PASSED

kafka.api.UserClientIdQuotaTest > testThrottledProducerConsumer STARTED

kafka.api.UserClientIdQuotaTest > testThrottledProducerConsumer PASSED

kafka.api.UserClientIdQuotaTest > testQuotaOverrideDelete STARTED

kafka.api.UserClientIdQuotaTest > testQuotaOverrideDelete PASSED

kafka.api.PlaintextConsumerTest > testEarliestOrLatestOffsets STARTED

kafka.api.PlaintextConsumerTest > testEarliestOrLatestOffsets PASSED

kafka.api.PlaintextConsumerTest > testPartitionsForAutoCreate STARTED

kafka.api.PlaintextConsumerTest > testPartitionsForAutoCreate PASSED

kafka.api.PlaintextConsumerTest > testShrinkingTopicSubscriptions STARTED

kafka.api.PlaintextConsumerTest > testShrinkingTopicSubscriptions PASSED

kafka.api.PlaintextConsumerTest > testMaxPollIntervalMs STARTED

kafka.api.PlaintextConsumerTest > testMaxPollIntervalMs PASSED

kafka.api.PlaintextConsumerTest > testOffsetsForTimes STARTED

kafka.api.PlaintextConsumerTest > testOffsetsForTimes PASSED

kafka.api.PlaintextConsumerTest > testSubsequentPatternSubscription STARTED

kafka.api.PlaintextConsumerTest > testSubsequentPatternSubscription PASSED

kafka.api.PlaintextConsumerTest > testAsyncCommit STARTED

kafka.api.PlaintextConsumerTest > testAsyncCommit PASSED

kafka.api.PlaintextConsumerTest > testLowMaxFetchSizeForRequestAndPartition 
STARTED

kafka.api.PlaintextConsumerTest > testLowMaxFetchSizeForRequestAndPartition 
PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerSessionTimeoutOnStopPolling 
STARTED
ERROR: Could not install GRADLE_2_4_RC_2_HOME
java.lang.NullPointerException

kafka.api.PlaintextConsumerTest > testMultiConsumerSessionTimeoutOnStopPolling 
PASSED

kafka.api.PlaintextConsumerTest > testMaxPollIntervalMsDelayInRevocation STARTED

kafka.api.PlaintextConsumerTest > testMaxPollIntervalMsDelayInRevocation PASSED

kafka.api.PlaintextConsumerTest > testPartitionsForInvalidTopic STARTED

kafka.api.PlaintextConsumerTest > testPartitionsForInvalidTopic PASSED

kafka.api.PlaintextConsumerTest > testPauseStateNotPreservedByRebalance STARTED

kafka.api.PlaintextConsumerTest > testPauseStateNotPreservedByRebalance PASSED

kafka.api.PlaintextConsumerTest > 
testFetchHonoursFetchSizeIfLargeRecordNotFirst STARTED

kafka.api.PlaintextConsumerTest > 
testFetchHonoursFetchSizeIfLargeRecordNotFirst PASSED

kafka.api.PlaintextConsumerTest > testSeek STARTED

kafka.api.PlaintextConsumerTest > testSeek PASSED

kafka.api.PlaintextConsumerTest > testPositionAndCommit STARTED

kafka.api.PlaintextConsumerTest > testPositionAndCommit PASSED

kafka.api.PlaintextConsumerTest > 
testFetchRecordLargerThanMaxPartitionFetchBytes STARTED

kafka.api.PlaintextConsumerTest > 
testFetchRecordLargerThanMaxPartitionFetchBytes PASSED

kafka.api.PlaintextConsumerTest > testUnsubscribeTopic STARTED

kafka.api.PlaintextConsumerTest > testUnsubscribeTopic PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerSessionTimeoutOnClose STARTED

kafka.api.PlaintextConsumerTest > testMultiConsumerSessionTimeoutOnClose PASSED

kafka.api.PlaintextConsumerTest > testFetchRecordLargerThanFetchMaxBytes STARTED

kafka.api.PlaintextConsumerTest > testFetchRecordLargerThanFetchMaxBytes PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerDefaultAssignment STARTED

kafka.api.PlaintextConsumerTest > testMultiConsumerDefaultAssignment PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitOnClose STARTED

kafka.api.PlaintextConsumerTest > testAutoCommitOnClose PASSED

kafka.api.PlaintextConsumerTest > testListTopics STARTED

kafka.api.PlaintextConsumerTest > testListTopics PASSED

kafka.api.PlaintextConsumerTest > testExpandingTopicSubscriptions STARTED

kafka.api.PlaintextConsumerTest > testExpandingTopicSubscriptions PASSED

kafka.api.PlaintextConsumerTest > testInterceptors STARTED

kafka.api.PlaintextConsumerTest > testInterceptors PASSED

kafka.api.PlaintextConsumerTest > testPatternUnsubscription STARTED

kafka.api.PlaintextConsumerTest > testPatternUnsubscription PASSED

kafka.api.Plai

[jira] [Commented] (KAFKA-4604) Gradle Eclipse plugin creates projects for non project subfolders

2017-01-10 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15815872#comment-15815872
 ] 

Guozhang Wang commented on KAFKA-4604:
--

Just curious: is this only an issue for `eclipse`, or does it affect `idea`
(IntelliJ) as well?

> Gradle Eclipse plugin creates projects for non project subfolders
> -
>
> Key: KAFKA-4604
> URL: https://issues.apache.org/jira/browse/KAFKA-4604
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.11.0.0
> Environment: Tried with Gradle 3.2.1 and Eclipse Neon
>Reporter: Dhwani Katagade
>Priority: Minor
>  Labels: build, easyfix, patch
>
> Running the command *./gradlew eclipse* generates .project and .classpath 
> files for all projects. But it also generates these files for the root 
> project folder and the connect subfolder that holds the 4 connector projects 
> even though these folders are not actual project folders.
> The unnecessary connect project is benign, but the unnecessary kafka project 
> created for the root folder has a side effect. The root folder has a bin 
> directory that holds some scripts. When a _Clean all projects_ is done in 
> Eclipse, it cleans up the scripts in the bin directory. These have to be 
> restored by running *git checkout \-\- *. This same could become a 
> problem for the connect project as well if tomorrow we place some files under 
> connect/bin.





Build failed in Jenkins: kafka-trunk-jdk7 #1820

2017-01-10 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: rename SessionStore.findSessionsToMerge to findSessions

--
[...truncated 18210 lines...]
org.apache.kafka.streams.processor.internals.StreamsMetadataStateTest > 
shouldGetInstanceWithKeyAndCustomPartitioner PASSED

org.apache.kafka.streams.processor.internals.StreamsMetadataStateTest > 
shouldThrowIfStoreNameIsNullOnGetAllInstancesWithStore STARTED

org.apache.kafka.streams.processor.internals.StreamsMetadataStateTest > 
shouldThrowIfStoreNameIsNullOnGetAllInstancesWithStore PASSED

org.apache.kafka.streams.processor.internals.AbstractTaskTest > 
shouldThrowProcessorStateExceptionOnInitializeOffsetsWhenAuthorizationException 
STARTED

org.apache.kafka.streams.processor.internals.AbstractTaskTest > 
shouldThrowProcessorStateExceptionOnInitializeOffsetsWhenAuthorizationException 
PASSED

org.apache.kafka.streams.processor.internals.AbstractTaskTest > 
shouldThrowProcessorStateExceptionOnInitializeOffsetsWhenKafkaException STARTED

org.apache.kafka.streams.processor.internals.AbstractTaskTest > 
shouldThrowProcessorStateExceptionOnInitializeOffsetsWhenKafkaException PASSED

org.apache.kafka.streams.processor.internals.AbstractTaskTest > 
shouldThrowWakeupExceptionOnInitializeOffsetsWhenWakeupException STARTED

org.apache.kafka.streams.processor.internals.AbstractTaskTest > 
shouldThrowWakeupExceptionOnInitializeOffsetsWhenWakeupException PASSED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldHaveCompactionPropSetIfSupplied STARTED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldHaveCompactionPropSetIfSupplied PASSED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldThrowIfNameIsNull STARTED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldThrowIfNameIsNull PASSED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldConfigureRetentionMsWithAdditionalRetentionWhenCompactAndDelete STARTED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldConfigureRetentionMsWithAdditionalRetentionWhenCompactAndDelete PASSED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldBeCompactedIfCleanupPolicyCompactOrCompactAndDelete STARTED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldBeCompactedIfCleanupPolicyCompactOrCompactAndDelete PASSED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldNotBeCompactedWhenCleanupPolicyIsDelete STARTED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldNotBeCompactedWhenCleanupPolicyIsDelete PASSED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldHavePropertiesSuppliedByUser STARTED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldHavePropertiesSuppliedByUser PASSED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldUseCleanupPolicyFromConfigIfSupplied STARTED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldUseCleanupPolicyFromConfigIfSupplied PASSED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldNotConfigureRetentionMsWhenCompact STARTED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldNotConfigureRetentionMsWhenCompact PASSED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldNotConfigureRetentionMsWhenDelete STARTED

org.apache.kafka.streams.processor.internals.InternalTopicConfigTest > 
shouldNotConfigureRetentionMsWhenDelete PASSED

org.apache.kafka.streams.processor.internals.StandbyTaskTest > 
shouldNotThrowUnsupportedOperationExceptionWhenInitializingStateStores STARTED

org.apache.kafka.streams.processor.internals.StandbyTaskTest > 
shouldNotThrowUnsupportedOperationExceptionWhenInitializingStateStores PASSED

org.apache.kafka.streams.processor.internals.StandbyTaskTest > 
testStorePartitions STARTED

org.apache.kafka.streams.processor.internals.StandbyTaskTest > 
testStorePartitions PASSED

org.apache.kafka.streams.processor.internals.StandbyTaskTest > testUpdateKTable 
STARTED

org.apache.kafka.streams.processor.internals.StandbyTaskTest > testUpdateKTable 
PASSED

org.apache.kafka.streams.processor.internals.StandbyTaskTest > 
testUpdateNonPersistentStore STARTED

org.apache.kafka.streams.processor.internals.StandbyTaskTest > 
testUpdateNonPersistentStore PASSED

org.apache.kafka.streams.processor.internals.StandbyTaskTest > testUpdate 
STARTED

org.apache.kafka.streams.processor.internals.StandbyTaskTest > testUpdate PASSED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
shouldThrowStreamsExceptionOnFlushIfASendFailed STARTED

org.apache.kafka.streams.processor.internals.RecordCollectorTest > 
shouldThrowStreamsExceptionOnFlushIfASendFaile

[GitHub] kafka pull request #2340: MINOR: ConfigDef `parseType` exception message upd...

2017-01-10 Thread Kamal15
GitHub user Kamal15 opened a pull request:

https://github.com/apache/kafka/pull/2340

MINOR: ConfigDef `parseType` exception message updated.

- When Kafka configurations are provided programmatically, an incorrect 
error message gets printed for the configuration below.
* configuration: kafkaProps.put("zookeeper.session.timeout.ms", 
60_000L); - expects an INTEGER value.
* Error Msg: "Invalid value 6 for configuration 
zookeeper.session.timeout.ms : Expected value to be an number."
* A Long is also a number, which makes the error message misleading.
- Minor code cleanup; unwanted local variables are removed.
- Javadoc dangling imports are updated.
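As a hedged illustration of the message fix (in Python rather than the actual Java ConfigDef, with invented names), a type check that names both the expected and the actual type avoids a vague "expected value to be a number" complaint when, say, a Long is passed where an Integer is required:

```python
# Illustrative sketch only; parse_type is an invented stand-in for
# ConfigDef's parsing, using Python types in place of ConfigDef.Type.

def parse_type(name, value, expected):
    # Fail with a message naming both the expected and the actual type,
    # so the caller can see exactly which numeric type was wrong.
    if type(value) is not expected:
        raise ValueError(
            f"Invalid value {value!r} for configuration {name}: "
            f"expected a value of type {expected.__name__}, "
            f"got {type(value).__name__}"
        )
    return value

print(parse_type("zookeeper.session.timeout.ms", 60000, int))
```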

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Kamal15/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2340.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2340








[jira] [Comment Edited] (KAFKA-1251) Add metrics to the producer

2017-01-10 Thread Andrey (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15815649#comment-15815649
 ] 

Andrey edited comment on KAFKA-1251 at 1/10/17 5:51 PM:


Hi,

Does anyone know why "counter" metrics are exposed as "rate"? With such 
aggregation it's not possible to correctly render over different timespans. See 
implementation in Kibana/Elastic:
https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-pipeline-derivative-aggregation.html

General idea: 
1) expose metrics as counter.
2) render tools take constantly growing number and calculate derivative over 
requested interval (10s, 1h, 1d, 1w).


was (Author: dernasherbrezon):
Hi,

Does anyone know why "counter" metrics are exposed as "rate"? With such 
aggregation it's not possible to correctly render over different timespans. See 
implementation in Kibana/Elastic:
https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-pipeline-derivative-aggregation.html

General idea: 
1) expose metrics as counter.
2) render tools take constantly growing number and calculate derivative over 
requested interval.
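The idea can be sketched as follows (illustrative Python, not Kafka's metrics code): with a raw monotonic counter, any rendering tool can derive a rate over whatever interval it is asked for, which is impossible once the value has already been pre-aggregated into a rate.

```python
# Sketch of the counter-plus-derivative approach described above; the
# function and sample data are hypothetical.

def rate(samples, t0, t1):
    """samples: list of (timestamp_seconds, counter_value), sorted by time.
    Returns the average events/sec between the samples at t0 and t1."""
    values = dict(samples)
    # Derivative of a monotonically increasing counter over the interval.
    return (values[t1] - values[t0]) / (t1 - t0)

samples = [(0, 0), (10, 50), (3600, 7250)]
print(rate(samples, 0, 10))    # rate over a short window
print(rate(samples, 0, 3600))  # rate over a long window, same raw counter
```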

> Add metrics to the producer
> ---
>
> Key: KAFKA-1251
> URL: https://issues.apache.org/jira/browse/KAFKA-1251
> Project: Kafka
>  Issue Type: Sub-task
>  Components: producer 
>Reporter: Jay Kreps
>Assignee: Jay Kreps
> Attachments: KAFKA-1251.patch, KAFKA-1251.patch, KAFKA-1251.patch, 
> KAFKA-1251.patch, KAFKA-1251_2014-03-19_10:19:27.patch, 
> KAFKA-1251_2014-03-19_10:29:05.patch, KAFKA-1251_2014-03-19_17:30:32.patch, 
> KAFKA-1251_2014-03-25_17:07:39.patch, metrics.png, new-producer-metrics.png
>
>
> Currently there are no metrics.





[jira] [Commented] (KAFKA-1251) Add metrics to the producer

2017-01-10 Thread Andrey (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15815649#comment-15815649
 ] 

Andrey commented on KAFKA-1251:
---

Hi,

Does anyone know why "counter" metrics are exposed as "rate"? With such 
aggregation it's not possible to correctly render over different timespans. See 
implementation in Kibana/Elastic:
https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-pipeline-derivative-aggregation.html

General idea: 
1) expose metrics as counter.
2) render tools take constantly growing number and calculate derivative over 
requested interval.

> Add metrics to the producer
> ---
>
> Key: KAFKA-1251
> URL: https://issues.apache.org/jira/browse/KAFKA-1251
> Project: Kafka
>  Issue Type: Sub-task
>  Components: producer 
>Reporter: Jay Kreps
>Assignee: Jay Kreps
> Attachments: KAFKA-1251.patch, KAFKA-1251.patch, KAFKA-1251.patch, 
> KAFKA-1251.patch, KAFKA-1251_2014-03-19_10:19:27.patch, 
> KAFKA-1251_2014-03-19_10:29:05.patch, KAFKA-1251_2014-03-19_17:30:32.patch, 
> KAFKA-1251_2014-03-25_17:07:39.patch, metrics.png, new-producer-metrics.png
>
>
> Currently there are no metrics.





[jira] [Resolved] (KAFKA-4506) Refactor AbstractRequest to contain version information

2017-01-10 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-4506.
--
Resolution: Duplicate
  Reviewer: Ewen Cheslack-Postava

This was absorbed into KAFKA-4507 because they were difficult to disentangle.

> Refactor AbstractRequest to contain version information
> ---
>
> Key: KAFKA-4506
> URL: https://issues.apache.org/jira/browse/KAFKA-4506
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
>
> Refactor AbstractRequest to contain version information.  Remove some client 
> code which implicitly assumes that the latest version of each message will 
> always be processed and used.  Always match the API version in request 
> headers to the request version.





[jira] [Updated] (KAFKA-4609) KTable/KTable join followed by groupBy and aggregate/count can result in incorrect results

2017-01-10 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4609:
-
Labels: architecture  (was: )

> KTable/KTable join followed by groupBy and aggregate/count can result in 
> incorrect results
> --
>
> Key: KAFKA-4609
> URL: https://issues.apache.org/jira/browse/KAFKA-4609
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.1, 0.10.2.0
>Reporter: Damian Guy
>Assignee: Damian Guy
>  Labels: architecture
>
> When caching is enabled, KTable/KTable joins can result in duplicate values 
> being emitted. This will occur if there were updates to the same key in both 
> tables. Each table is flushed independently, and each table will trigger the 
> join, so you get two results for the same key. 
> If we subsequently perform a groupBy and then aggregate operation we will now 
> process these duplicates resulting in incorrect aggregated values. For 
> example count will be double the value it should be.
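A minimal model of the effect described above (plain Python standing in for the Streams topology; all names are hypothetical): updating the same key in both tables causes each table's flush to re-trigger the join, so a downstream count sees the join result twice and comes out doubled.

```python
# Simplified model of the duplicate-emission bug, not Streams code.

left = {"k": 1}
right = {"k": 2}

join_results = []
# The same key was updated in both tables; each table flushes its cache
# independently, and each flush fires the join for that key.
for _flush in ("left_flush", "right_flush"):
    join_results.append(("k", (left["k"], right["k"])))

# A downstream groupBy + count now processes both (identical) results.
count = {}
for key, _value in join_results:
    count[key] = count.get(key, 0) + 1

print(count)  # {'k': 2} -- one logical update counted twice
```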





[jira] [Updated] (KAFKA-4609) KTable/KTable join followed by groupBy and aggregate/count can result in incorrect results

2017-01-10 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4609:
-
Component/s: streams

> KTable/KTable join followed by groupBy and aggregate/count can result in 
> incorrect results
> --
>
> Key: KAFKA-4609
> URL: https://issues.apache.org/jira/browse/KAFKA-4609
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.1, 0.10.2.0
>Reporter: Damian Guy
>Assignee: Damian Guy
>
> When caching is enabled, KTable/KTable joins can result in duplicate values 
> being emitted. This will occur if there were updates to the same key in both 
> tables. Each table is flushed independently, and each table will trigger the 
> join, so you get two results for the same key. 
> If we subsequently perform a groupBy and then aggregate operation we will now 
> process these duplicates resulting in incorrect aggregated values. For 
> example count will be double the value it should be.





[GitHub] kafka pull request #2339: MINOR: rename SessionStore.findSessionsToMerge to ...

2017-01-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2339




Re: [DISCUSS] KIP-98: Exactly Once Delivery and Transactional Messaging

2017-01-10 Thread Jason Gustafson
Hi All,

We've been putting some thought into the need to buffer fetched data in the
consumer in the READ_COMMITTED isolation mode and have a proposal to
address the concern. The basic idea is to introduce an index to keep track
of the aborted transactions. We use this index to return in each fetch a
list of the aborted transactions from the fetch range so that the consumer
can tell without any buffering whether a record set should be returned to
the user. Take a look and let us know what you think:
https://docs.google.com/document/d/1Rlqizmk7QCDe8qAnVW5e5X8rGvn6m2DCR3JR2yqwVjc/edit?usp=sharing
.

Thanks,
Jason
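The fetch-side filtering described above can be sketched roughly as follows (the data shapes are illustrative assumptions, not the actual protocol schema): each fetch carries the aborted transactions overlapping the fetched range, so a READ_COMMITTED consumer can decide per record batch, without buffering, whether to hand it to the user.

```python
# Hedged sketch of READ_COMMITTED filtering; field layouts are invented
# for illustration and do not reflect the real FetchResponse format.

def readable_batches(batches, aborted):
    """batches: list of (producer_id, offset, is_transactional).
    aborted: list of (producer_id, first_offset, last_offset) ranges
    returned alongside the fetch."""
    out = []
    for pid, offset, transactional in batches:
        # A transactional batch is dropped if it falls inside an aborted
        # transaction's offset range for the same producer.
        skip = transactional and any(
            pid == a_pid and first <= offset <= last
            for a_pid, first, last in aborted
        )
        if not skip:
            out.append((pid, offset, transactional))
    return out

batches = [(7, 0, True), (7, 1, True), (9, 2, False)]
aborted = [(7, 0, 0)]  # producer 7 aborted the txn covering offset 0
print(readable_batches(batches, aborted))  # offset 0 filtered out
```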

On Sun, Jan 8, 2017 at 9:32 PM, Jun Rao  wrote:

> Hi, Jason,
>
> 100. Yes, AppId level security is mainly for protecting the shared
> transaction log. We could also include AppId in produce request (not in
> message format) so that we could protect writes at the AppId level. I agree
> that we need to support prefix matching on AppId for applications like
> stream to use this conveniently.
>
> A couple of other comments.
>
> 122. Earlier, Becket asked for the use case of knowing the number of
> messages in a message set. One potential use case is KAFKA-4293. Currently,
> since we don't know the number of messages in a compressed set, to finish
> the iteration, we rely on catching EOF in the decompressor, which adds a
> bit overhead in the consumer.
>
> 123. I am wondering if the coordinator needs to add a "BEGIN transaction
> message" on a BeginTxnRequest. Could we just wait until an
> AddPartitionsToTxnRequest?
>
> Thanks,
>
> Jun
>
>
> On Thu, Jan 5, 2017 at 11:05 AM, Jason Gustafson 
> wrote:
>
> > Hi Jun,
> >
> > Let me start picking off a some of your questions (we're giving the
> shadow
> > log suggestion a bit more thought before responding).
> >
> > 100. Security: It seems that if an app is mistakenly configured with the
> > > appId of an existing producer, it can take over the pid and prevent the
> > > existing app from publishing. So, I am wondering if it makes sense to
> add
> > > ACLs at the TransactionResource level just like we do for
> > > ConsumerGroupResource. So, a user can only do transactions under a
> > > particular appId if he/she has the write permission to the
> > > TransactionResource
> > > associated with the appId.
> >
> >
> > I think this makes sense in general. There are a couple points worth
> > mentioning:
> >
> > 1. Because we only use the AppID in requests to the transaction
> > coordinator, that's the only point at which we can do authorization in
> the
> > current proposal. It is possible for a malicious producer to hijack
> another
> > producer's PID and use it to write data. It wouldn't be able to commit or
> > abort transactions, but it could effectively fence the legitimate
> producer
> > from a partition by forcing an epoch bump. We could add the AppID to the
> > ProduceRequest schema, but we would still need to protect its binding to
> > the PID somehow. This is one argument in favor of dropping the PID and
> > using the AppID in the log message format. However, there are still ways
> in
> > the current proposal to give better protection if we added the AppID
> > authorization at the transaction coordinator as you suggest. Note that a
> > malicious producer would have to be authorized to write to the same
> topics
> > used by the transactional producer. So one way to protect those topics is
> > to only allow write access by the authorized transactional producers. The
> > transactional producers could still interfere with each other, but
> perhaps
> > that's a smaller concern (it's similar in effect to the limitations of
> > consumer group authorization).
> >
> > 2. It's a bit unfortunate that we don't have something like the
> consumer's
> > groupId to use for authorization. The AppID is really more of an instance
> > ID (we were reluctant to introduce any formal notion of a producer
> group).
> > I guess distributed applications could use a common prefix and a wildcard
> > authorization policy. I don't think we currently support general
> wildcards,
> > but that might be helpful for this use case.
> >
> > -Jason
> >
> > On Wed, Jan 4, 2017 at 12:55 PM, Jay Kreps  wrote:
> >
> > > Hey Jun,
> > >
> > > We had a proposal like this previously. The suppression scheme was
> > slightly
> > > different. Rather than than attempting to recopy or swap, there was
> > instead
> > > an aborted offset index maintained along with each segment containing a
> > > sequential list of aborted offsets. The filtering would happen at fetch
> > > time and would just ensure that fetch requests never span an aborted
> > > transaction. That is, if you did a fetch request which would include
> > > offsets 7,8,9,10,11, but offsets 7 and 10 appears in the inde

Re: [VOTE] KIP-109: Old Consumer Deprecation

2017-01-10 Thread Vahid S Hashemian
Hi Ismael,

Sure, that sounds good to me. Thanks.

--Vahid 




From:   Ismael Juma 
To: dev@kafka.apache.org
Date:   01/10/2017 03:26 AM
Subject:Re: [VOTE] KIP-109: Old Consumer Deprecation
Sent by:isma...@gmail.com



Thanks Vahid. I think there are 2 aspects to this KIP:

1. Deprecating the old consumers
2. When it should be done

I think everyone agrees that we should deprecate it, but there is a
difference of opinions on the timing. I think it may be best not to rush
it, so I'm +1 on this for the release after 0.10.2.0 (which means the PR
could be merged after the 0.10.2 branch is created in a couple of weeks).

Ismael

On Mon, Jan 9, 2017 at 4:58 PM, Vahid S Hashemian 
 wrote:

> Happy Monday,
>
> I'd like to thank everyone who participated in the discussion around 
this
> KIP and shared their opinion.
>
> The only concern that was raised was not having a defined migration plan
> yet for existing users of the old consumer.
> I hope that responses to this concern (on the discussion thread) have 
been
> satisfactory.
>
> Given the short time we have until the 0.10.2.0 cut-off date I'd like to
> start voting on this KIP.
>
> KIP:
> 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-109%3A+Old+Consumer+
> Deprecation
> Discussion thread:
> https://www.mail-archive.com/dev@kafka.apache.org/msg63427.html
>
> Thanks.
> --Vahid
>
>
>






[GitHub] kafka pull request #2339: MINOR: rename SessionStore.findSessionsToMerge to ...

2017-01-10 Thread dguy
GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/2339

MINOR: rename SessionStore.findSessionsToMerge to findSessions

Rename `SessionStore.findSessionsToMerge` to `findSessions`

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka minor-findsession-rename

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2339.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2339


commit a5b4bad3958c8ee51c85cc3bb4cc2fed0ae29b3c
Author: Damian Guy 
Date:   2017-01-10T16:36:57Z

rename findSessionsToMerge -> findSessions






Build failed in Jenkins: kafka-trunk-jdk8 #1164

2017-01-10 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-3751; SASL/SCRAM implementation

--
[...truncated 19810 lines...]
org.apache.kafka.connect.runtime.WorkerTest > testStopInvalidConnector PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testPollsInBackground STARTED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testPollsInBackground PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testCommit STARTED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testCommit PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testCommitFailure 
STARTED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testCommitFailure 
PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testCommitSuccessFollowedByFailure STARTED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testCommitSuccessFollowedByFailure PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testCommitConsumerFailure STARTED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testCommitConsumerFailure PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testCommitTimeout 
STARTED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testCommitTimeout 
PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testAssignmentPauseResume STARTED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testAssignmentPauseResume PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testRewind STARTED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > testRewind PASSED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testRewindOnRebalanceDuringPoll STARTED

org.apache.kafka.connect.runtime.WorkerSinkTaskThreadedTest > 
testRewindOnRebalanceDuringPoll PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testLeaderPerformAssignmentSingleTaskConnectors STARTED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testLeaderPerformAssignmentSingleTaskConnectors PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testNormalJoinGroupFollower STARTED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testNormalJoinGroupFollower PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testMetadata STARTED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testMetadata PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testLeaderPerformAssignment1 STARTED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testLeaderPerformAssignment1 PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testLeaderPerformAssignment2 STARTED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testLeaderPerformAssignment2 PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testJoinLeaderCannotAssign STARTED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testJoinLeaderCannotAssign PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testRejoinGroup STARTED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testRejoinGroup PASSED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testNormalJoinGroupLeader STARTED

org.apache.kafka.connect.runtime.distributed.WorkerCoordinatorTest > 
testNormalJoinGroupLeader PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testPutConnectorConfig STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnectorFailedBasicValidation STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnectorFailedBasicValidation PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnector STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnector PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnectorAlreadyExists STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnectorAlreadyExists PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testDestroyConnector STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testDestroyConnector PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartConnector STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartConnector PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testJoinAssignment STAR

Build failed in Jenkins: kafka-trunk-jdk8 #1163

2017-01-10 Thread Apache Jenkins Server
See 

Changes:

[ismael] MINOR: Remove unnecessary semi-colons

--
[...truncated 33841 lines...]

org.apache.kafka.streams.KafkaStreamsTest > testCleanup STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCleanup PASSED

org.apache.kafka.streams.KafkaStreamsTest > testStartAndClose STARTED

org.apache.kafka.streams.KafkaStreamsTest > testStartAndClose PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCloseIsIdempotent STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCloseIsIdempotent PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCannotCleanupWhileRunning 
STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCannotCleanupWhileRunning PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCannotStartTwice STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCannotStartTwice PASSED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegion[0] STARTED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegion[0] PASSED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegion[1] STARTED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegion[1] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[0] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[0] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[0] FAILED
java.lang.AssertionError: Condition not met within timeout 6. Did not 
receive 1 number of records
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:259)
at 
org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinValuesRecordsReceived(IntegrationTestUtils.java:253)
at 
org.apache.kafka.streams.integration.QueryableStateIntegrationTest.waitUntilAtLeastNumRecordProcessed(QueryableStateIntegrationTest.java:669)
at 
org.apache.kafka.streams.integration.QueryableStateIntegrationTest.queryOnRebalance(QueryableStateIntegrationTest.java:350)

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses[0] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[1] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[1] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[1] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses[1] PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testShouldReadFromRegexAndNamedTopics STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testShouldReadFromRegexAndNamedTopics PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenCreated STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenCreated PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenDeleted STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenDeleted PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testNoMessagesSentExceptionFromOverlappingPatterns STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testNoMessagesSentExceptionFromOverlappingPatterns PASSED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectl

[jira] [Commented] (KAFKA-4125) Provide low-level Processor API meta data in DSL layer

2017-01-10 Thread Bill Bejeck (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15815003#comment-15815003
 ] 

Bill Bejeck commented on KAFKA-4125:


[~guozhang] [~mjsax]  Thanks for the response.  Sure, I'll give the KIP proposal 
a shot.

> Provide low-level Processor API meta data in DSL layer
> --
>
> Key: KAFKA-4125
> URL: https://issues.apache.org/jira/browse/KAFKA-4125
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Guozhang Wang
>Priority: Minor
>
> For the Processor API, users can get metadata like record offset, timestamp, etc. 
> via the provided {{Context}} object. It might be useful to allow users to 
> access this information in the DSL layer, too.
> The idea would be to do it "the Flink way", i.e., by providing
> RichFunctions; {{mapValues()}} for example.
> It takes a {{ValueMapper}} that only has the method
> {noformat}
> V2 apply(V1 value);
> {noformat}
> Thus, you cannot get any metadata within {{apply()}} (it's completely "blind").
> We would add two more interfaces: {{RichFunction}} with a method
> {{open(Context context)}} and
> {noformat}
> RichValueMapper<V1, V2> extends ValueMapper<V1, V2>, RichFunction
> {noformat}
> This way, the user can choose to implement a Rich- or standard function, and
> we do not need to change existing APIs. Both can be handed into
> {{KStream.mapValues()}}, for example. Internally, we check whether a Rich
> function is provided, and if yes, hand in the {{Context}} object once to
> make it available to the user, who can now access it within {{apply()}} -- of
> course, the user must set a member variable in {{open()}} to hold the
> reference to the Context object.
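
The interfaces proposed in the ticket could be sketched like this. Everything here is illustrative: `Context` is a minimal stand-in for the Processor API's per-record metadata holder, and `TimestampTaggingMapper` is a hypothetical user implementation showing the member-variable pattern described above.

```java
// The existing "blind" mapper, as in the ticket.
interface ValueMapper<V1, V2> {
    V2 apply(V1 value);
}

// Proposed: called once with the metadata context before any apply().
interface RichFunction {
    void open(Context context);
}

// Proposed: a mapper that is both a ValueMapper and a RichFunction.
interface RichValueMapper<V1, V2> extends ValueMapper<V1, V2>, RichFunction {}

// Minimal stand-in for the per-record metadata holder.
class Context {
    final long offset;
    final long timestamp;
    Context(long offset, long timestamp) {
        this.offset = offset;
        this.timestamp = timestamp;
    }
}

// Example rich mapper: saves the Context in open() and reads it in apply().
class TimestampTaggingMapper implements RichValueMapper<String, String> {
    private Context ctx;

    public void open(Context context) {
        this.ctx = context;  // member variable holds the reference
    }

    public String apply(String value) {
        return value + "@" + ctx.timestamp;  // metadata now visible in apply()
    }
}
```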



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3751) Add support for SASL/SCRAM mechanisms

2017-01-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15814919#comment-15814919
 ] 

ASF GitHub Bot commented on KAFKA-3751:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2086


> Add support for SASL/SCRAM mechanisms
> -
>
> Key: KAFKA-3751
> URL: https://issues.apache.org/jira/browse/KAFKA-3751
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.10.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.10.2.0
>
>
> Salted Challenge Response Authentication Mechanism (SCRAM) provides secure 
> authentication and is increasingly being adopted as an alternative to 
> Digest-MD5 which is now obsolete. SCRAM is described in the RFC 
> [https://tools.ietf.org/html/rfc5802]. It will be good to add support for 
> SCRAM-SHA-256 ([https://tools.ietf.org/html/rfc7677]) as a SASL mechanism for 
> Kafka.
> See 
> [KIP-84|https://cwiki.apache.org/confluence/display/KAFKA/KIP-84%3A+Support+SASL+SCRAM+mechanisms]
>  for details.



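
For context, client-side settings for the merged SCRAM support might look roughly like the following. This is a hedged sketch (property names as described in KIP-84; the protocol choice and credentials are placeholders, and the SCRAM user credentials must first be created on the broker side):

```properties
# Illustrative client configuration for SASL/SCRAM -- values are placeholders:
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="alice" \
  password="alice-secret";
```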


[GitHub] kafka pull request #2086: KAFKA-3751: SASL/SCRAM implementation

2017-01-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2086




[jira] [Updated] (KAFKA-3751) Add support for SASL/SCRAM mechanisms

2017-01-10 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-3751:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 2086
[https://github.com/apache/kafka/pull/2086]

> Add support for SASL/SCRAM mechanisms
> -
>
> Key: KAFKA-3751
> URL: https://issues.apache.org/jira/browse/KAFKA-3751
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.10.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.10.2.0
>
>
> Salted Challenge Response Authentication Mechanism (SCRAM) provides secure 
> authentication and is increasingly being adopted as an alternative to 
> Digest-MD5 which is now obsolete. SCRAM is described in the RFC 
> [https://tools.ietf.org/html/rfc5802]. It will be good to add support for 
> SCRAM-SHA-256 ([https://tools.ietf.org/html/rfc7677]) as a SASL mechanism for 
> Kafka.
> See 
> [KIP-84|https://cwiki.apache.org/confluence/display/KAFKA/KIP-84%3A+Support+SASL+SCRAM+mechanisms]
>  for details.





Re: [VOTE] KIP-102 - Add close with timeout for consumers

2017-01-10 Thread Rajini Sivaram
Yes, Thank you, Ismael.

Many thanks to everyone for the feedback and votes. The vote has passed
with 5 binding (Neha, Ewen, Sriram, Ismael, Jason) and three non-binding
(Edo, Apurva, me) +1s. I will update the KIP pages.

Regards,

Rajini


On Tue, Jan 10, 2017 at 11:33 AM, Ismael Juma  wrote:

> Rajini, maybe we can close the vote?
>
> Ismael
>
> On Fri, Jan 6, 2017 at 7:48 PM, Apurva Mehta  wrote:
>
> > +1 (non-binding).
> >
> > On Fri, Jan 6, 2017 at 9:24 AM, Jason Gustafson 
> > wrote:
> >
> > > Thanks for the KIP. +1
> > >
> > > On Fri, Jan 6, 2017 at 2:26 AM, Ismael Juma  wrote:
> > >
> > > > Thanks for the KIP, +1 (binding).
> > > >
> > > > As I said in the discussion thread, I'm not too sure about the
> > hardcoded
> > > 30
> > > > seconds timeout for the no-args `close` method. Still, it's an
> > > improvement
> > > > over what is in trunk at the moment and I don't have a good
> alternative
> > > > given that request.timeout is pretty long by default (5 minutes).
> > > >
> > > > Ismael
> > > >
> > > > On Thu, Jan 5, 2017 at 10:07 PM, Rajini Sivaram <
> > rajinisiva...@gmail.com
> > > >
> > > > wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > >
> > > > > I would like to start the voting process for *KIP-102 - Add close
> > with
> > > > > timeout for consumers*:
> > > > >
> > > > >
> > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > > 102+-+Add+close+with+timeout+for+consumers
> > > > >
> > > > >
> > > > >
> > > > > This KIP adds a new close method with a timeout for consumers
> similar
> > > to
> > > > > the close method in the producer. As described in the discussion
> > thread
> > > > >  > > mbox/%3cCAG_+
> > > > > n9us5ohthwmyai9pz4s2j62fmils2ufj8oie9jpmyaf...@mail.gmail.com%3e>,
> > > > > the changes are only in the close code path and hence the impact is
> > not
> > > > too
> > > > > big. The existing close() method without a timeout will use a
> default
> > > > > timeout of 30 seconds.
> > > > >
> > > > >
> > > > > Thank you
> > > > >
> > > > >
> > > > > Regards,
> > > > >
> > > > >
> > > > > Rajini
> > > > >
> > > >
> > >
> >
>
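
The timeout semantics being voted on can be illustrated with a toy sketch. Note this is not the actual consumer code: per the KIP, the real API is a new timed close method on the consumer (with the no-arg `close()` defaulting to 30 seconds); `BoundedCloseClient` below only demonstrates the bounded-wait behavior.

```java
import java.util.concurrent.TimeUnit;

class BoundedCloseClient {
    private volatile boolean pendingWork = true;

    // Pretend to flush/commit outstanding work; returns true once done.
    boolean tryFinishWork() {
        pendingWork = false;
        return true;
    }

    // Close, waiting at most `timeout` for outstanding work, then release
    // resources unconditionally rather than blocking indefinitely.
    // Returns true if shutdown completed cleanly within the timeout.
    boolean close(long timeout, TimeUnit unit) {
        long deadline = System.nanoTime() + unit.toNanos(timeout);
        boolean clean = false;
        while (System.nanoTime() < deadline) {
            if (tryFinishWork()) {
                clean = true;
                break;
            }
        }
        pendingWork = false;  // resources are released either way
        return clean;
    }
}
```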


[GitHub] kafka pull request #2326: MINOR: Various small scala cleanups

2017-01-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2326




Re: [VOTE] KIP-102 - Add close with timeout for consumers

2017-01-10 Thread Ismael Juma
Rajini, maybe we can close the vote?

Ismael

On Fri, Jan 6, 2017 at 7:48 PM, Apurva Mehta  wrote:

> +1 (non-binding).
>
> On Fri, Jan 6, 2017 at 9:24 AM, Jason Gustafson 
> wrote:
>
> > Thanks for the KIP. +1
> >
> > On Fri, Jan 6, 2017 at 2:26 AM, Ismael Juma  wrote:
> >
> > > Thanks for the KIP, +1 (binding).
> > >
> > > As I said in the discussion thread, I'm not too sure about the
> hardcoded
> > 30
> > > seconds timeout for the no-args `close` method. Still, it's an
> > improvement
> > > over what is in trunk at the moment and I don't have a good alternative
> > > given that request.timeout is pretty long by default (5 minutes).
> > >
> > > Ismael
> > >
> > > On Thu, Jan 5, 2017 at 10:07 PM, Rajini Sivaram <
> rajinisiva...@gmail.com
> > >
> > > wrote:
> > >
> > > > Hi all,
> > > >
> > > >
> > > > I would like to start the voting process for *KIP-102 - Add close
> with
> > > > timeout for consumers*:
> > > >
> > > >
> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > 102+-+Add+close+with+timeout+for+consumers
> > > >
> > > >
> > > >
> > > > This KIP adds a new close method with a timeout for consumers similar
> > to
> > > > the close method in the producer. As described in the discussion
> thread
> > > >  > mbox/%3cCAG_+
> > > > n9us5ohthwmyai9pz4s2j62fmils2ufj8oie9jpmyaf...@mail.gmail.com%3e>,
> > > > the changes are only in the close code path and hence the impact is
> not
> > > too
> > > > big. The existing close() method without a timeout will use a default
> > > > timeout of 30 seconds.
> > > >
> > > >
> > > > Thank you
> > > >
> > > >
> > > > Regards,
> > > >
> > > >
> > > > Rajini
> > > >
> > >
> >
>


Re: [VOTE] KIP-103: Separation of Internal and External traffic

2017-01-10 Thread Ismael Juma
Thanks to everyone who voted and provided feedback. The vote has passed
with 5 binding votes (Sriram, Jun, Neha, Becket, Joel) and 4 non-binding
votes (Rajini, Colin, Tom, Roger).

I have updated the relevant wiki pages.

Ismael

On Mon, Jan 9, 2017 at 9:52 PM, Joel Koshy  wrote:

> +1
>
> On Fri, Jan 6, 2017 at 4:53 PM, Ismael Juma  wrote:
>
> > Hi all,
> >
> > Since a few people (including myself) felt that listener name was clearer
> > than protocol label, I updated the KIP to use that (as mentioned in the
> > discuss thread). Given that this is a minor change, I don't think we need
> > to restart the vote. If anyone objects to this change, please let me
> know.
> >
> > Thanks,
> > Ismael
> >
> > On Fri, Jan 6, 2017 at 6:58 PM, Colin McCabe  wrote:
> >
> > > Looks good.  +1 (non-binding).
> > >
> > > What do you think about changing "protocol label" to "listener key"?
> > >
> > > best,
> > > Colin
> > >
> > >
> > > On Fri, Jan 6, 2017, at 09:23, Neha Narkhede wrote:
> > > > +1
> > > >
> > > > On Fri, Jan 6, 2017 at 9:21 AM Jun Rao  wrote:
> > > >
> > > > > Hi, Ismael,
> > > > >
> > > > > Thanks for the KIP. +1
> > > > >
> > > > > Jun
> > > > >
> > > > > On Fri, Jan 6, 2017 at 2:51 AM, Ismael Juma 
> > wrote:
> > > > >
> > > > > > Hi all,
> > > > > >
> > > > > > As the discussion seems to have settled down, I would like to
> > > initiate
> > > > > the
> > > > > > voting process for KIP-103: Separation of Internal and External
> > > traffic:
> > > > > >
> > > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > > > 103%3A+Separation+of+Internal+and+External+traffic
> > > > > >
> > > > > > The vote will run for a minimum of 72 hours.
> > > > > >
> > > > > > Thanks,
> > > > > > Ismael
> > > > > >
> > > > >
> > > > --
> > > > Thanks,
> > > > Neha
> > >
> >
>
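
For reference, broker configuration under this KIP might look roughly like the following. This is a hedged sketch: the listener names (INTERNAL/EXTERNAL) are operator-chosen labels used here as examples, and hosts/ports are placeholders; the KIP page has the authoritative property names.

```properties
# Two named listeners, mapped to security protocols, with internal traffic
# kept on its own listener:
listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9093
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:SSL
advertised.listeners=INTERNAL://broker1.local:9092,EXTERNAL://broker1.example.com:9093
inter.broker.listener.name=INTERNAL
```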


Re: [VOTE] KIP-109: Old Consumer Deprecation

2017-01-10 Thread Ismael Juma
Thanks Vahid. I think there are 2 aspects to this KIP:

1. Deprecating the old consumers
2. When it should be done

I think everyone agrees that we should deprecate it, but there is a
difference of opinions on the timing. I think it may be best not to rush
it, so I'm +1 on this for the release after 0.10.2.0 (which means the PR
could be merged after the 0.10.2 branch is created in a couple of weeks).

Ismael

On Mon, Jan 9, 2017 at 4:58 PM, Vahid S Hashemian  wrote:

> Happy Monday,
>
> I'd like to thank everyone who participated in the discussion around this
> KIP and shared their opinion.
>
> The only concern that was raised was not having a defined migration plan
> yet for existing users of the old consumer.
> I hope that responses to this concern (on the discussion thread) have been
> satisfactory.
>
> Given the short time we have until the 0.10.2.0 cut-off date I'd like to
> start voting on this KIP.
>
> KIP:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-109%3A+Old+Consumer+
> Deprecation
> Discussion thread:
> https://www.mail-archive.com/dev@kafka.apache.org/msg63427.html
>
> Thanks.
> --Vahid
>
>
>


Re: [DISCUSS] KIP-110: Add Codec for ZStandard Compression

2017-01-10 Thread Dongjin Lee
Ismael,

I see. Then, I will share the benchmark code I used by tomorrow. Thanks for
your guidance.

Best,
Dongjin
-

Dongjin Lee

Software developer in Line+.
So interested in massive-scale machine learning.

facebook: www.facebook.com/dongjin.lee.kr
linkedin: kr.linkedin.com/in/dongjinleekr
github: github.com/dongjinleekr
twitter: www.twitter.com/dongjinleekr




On Tue, Jan 10, 2017 at 7:24 PM +0900, "Ismael Juma"  wrote:

Dongjin,

The KIP states:

"I compared the compressed size and compression time of 3 1kb-sized
messages (3102 bytes in total), with the Draft-implementation of ZStandard
Compression Codec and all currently available CompressionCodecs. All
elapsed times are the average of 20 trials."

But doesn't give any details of how this was implemented. Is the source
code available somewhere? Micro-benchmarking in the JVM is pretty tricky so
it needs verification before numbers can be trusted. A performance test
with kafka-producer-perf-test.sh would be nice to have as well, if possible.

Thanks,
Ismael

On Tue, Jan 10, 2017 at 7:44 AM, Dongjin Lee  wrote:

> Ismael,
>
> 1. Is the benchmark in the KIP page not enough? You mean we need a whole
> performance test using kafka-producer-perf-test.sh?
>
> 2. It seems like no major project is relying on it currently. However,
> after reviewing the code, I concluded that the project at least has good
> test coverage. As for upstream tracking - although there have been no
> significant ZStandard updates recently to judge this by, it seems not bad.
> If required, I can take responsibility for tracking this library.
>
> Thanks,
> Dongjin
>
> On Tue, Jan 10, 2017 at 7:09 AM, Ismael Juma  wrote:
>
> > Thanks for posting the KIP, ZStandard looks like a nice improvement over
> > the existing compression algorithms. A couple of questions:
> >
> > 1. Can you please elaborate on the details of the benchmark?
> > 2. About https://github.com/luben/zstd-jni, can we rely on it? A few
> > things
> > to consider: are there other projects using it, does it have good test
> > coverage, are there performance tests, does it track upstream closely?
> >
> > Thanks,
> > Ismael
> >
> > On Fri, Jan 6, 2017 at 2:40 AM, Dongjin Lee  wrote:
> >
> > > Hi all,
> > >
> > > I've just posted a new KIP "KIP-110: Add Codec for ZStandard
> Compression"
> > > for
> > > discussion:
> > >
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > 110%3A+Add+Codec+for+ZStandard+Compression
> > >
> > > Please have a look when you are free.
> > >
> > > Best,
> > > Dongjin
> > >
> > >
> >
>
>
>
>
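
As an illustration of the measurement methodology under discussion (average size and time over 20 trials of a ~3 KB payload), here is a minimal sketch. It uses the JDK's GZIP purely as a stand-in codec, since zstd-jni is a third-party dependency; and, as noted above, naive timing like this is unreliable on the JVM, so a real comparison should use a harness such as JMH.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

public class CompressionProbe {

    // Compress with GZIP from the JDK (a stand-in codec for this sketch).
    static byte[] gzip(byte[] input) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
                gz.write(input);
            }
            return bos.toByteArray();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    // Average compressed size (bytes) and elapsed time (nanos) over `trials` runs.
    static long[] measure(byte[] payload, int trials) {
        long totalBytes = 0, totalNanos = 0;
        for (int i = 0; i < trials; i++) {
            long start = System.nanoTime();
            byte[] out = gzip(payload);
            totalNanos += System.nanoTime() - start;
            totalBytes += out.length;
        }
        return new long[] { totalBytes / trials, totalNanos / trials };
    }
}
```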







Re: [DISCUSS] KIP-109: Old Consumer Deprecation

2017-01-10 Thread Ismael Juma
Hi Ewen,

I agree that it would be good to clarify what we consider a message format
change.

The compaction tombstone was a message format change because it would mean
that `null` no longer meant delete for compacted topics, so there is a
semantics change as well (and the cleaner would have to be aware of this).

For something like a new compression algorithm, it could potentially be
done without a message format change because you have to opt in (i.e. don't
create topics with the new compression algorithm until your brokers and
clients support it). As you said though, bumping both message format
version and fetch/produce request/response versions does give a little more
flexibility and safety and it's nice to be able to specify that for message
format version X, a certain compression algorithm must be supported. Still,
down-conversion is costly enough that it may be worth avoiding unless we
can bundle the compression format addition with other message format
changes (it may be worth bundling in this case).
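The opt-in gating described above, where a codec is permitted only from a given message format version onward, can be sketched as a simple validation step. The version-to-codec mapping below is hypothetical and purely illustrative; it is not a statement of what any Kafka release actually supports:

```python
# Hypothetical mapping from message format version to permitted codecs;
# the real assignment would be decided in the KIP. Illustrative only.
CODECS_BY_FORMAT_VERSION = {
    0: {"none", "gzip", "snappy"},
    1: {"none", "gzip", "snappy", "lz4"},
    2: {"none", "gzip", "snappy", "lz4", "zstd"},
}

def validate_codec(codec, broker_format_version):
    """Reject a codec the broker's message format version cannot represent,
    so producers fail fast instead of writing data consumers may not read."""
    allowed = CODECS_BY_FORMAT_VERSION[broker_format_version]
    if codec not in allowed:
        raise ValueError(
            f"codec {codec!r} needs a newer message format; "
            f"version {broker_format_version} supports {sorted(allowed)}"
        )
    return codec
```

With this kind of check, a producer targeting an older-format broker rejects the new codec up front rather than forcing a costly down-conversion later.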

Ismael

On Tue, Jan 10, 2017 at 6:04 AM, Ewen Cheslack-Postava 
wrote:

> I don't mean to derail this at all, but given Ismael's question I'm
> wondering what exactly we consider a message format change? Are we assuming
> that additions like a new compression format (i.e. semantic changes where
> previously undefined bits, but bits which existed) don't require a format
> version change? I ask because we went the other way with protocol requests,
> but there are a number of outstanding KIPs (zstd compression and compaction
> tombstones off the top of my head) which people presumably were targeting
> for earlier than 0.11.0.0 but are, strictly speaking, "new" formats.
>
> I think the different definitions might make sense, but it is probably
> worth clarifying. And this would have important implications. We have a
> bunch of proposed structural changes upcoming in the message format, but
> obviously we'd like to keep the number of times folks have to think about
> this to a minimum. On the other hand, this means that semantic changes like
> supporting a specific compression format might force folks to stay on an
> earlier broker version if they want to avoid, e.g., producers accidentally
> sending data in a format consumers won't understand (i.e. in this case they
> may want all clients upgraded before all brokers, which is where separating
> log format and request/response versions comes in handy). The deletion
> tombstones might be an even better example here since the details are
> probably handled automatically by the library and it only takes one
> upgraded client to do the wrong thing and break lots of downstream systems.
>
> In other words, we potentially tie quite a few improvements that we might
> otherwise think of as incremental improvements only to major version
> releases. This could in turn force/encourage us to do major releases more
> frequently and we lose the benefit of infrequent message format changes, or
> we still do them at the same cadence and it takes forever for new
> incremental features to land.
>
> -Ewen
>
> On Mon, Jan 9, 2017 at 6:34 PM, Ismael Juma  wrote:
>
> > Hi Joel,
> >
> > Great to hear that LinkedIn is likely to implement KAFKA-4513. :)
> >
> > Yes, the KIP as it stands is a compromise given the situation. We could
> > push the deprecation to the subsequent release: likely to be 0.11.0.0
> since
> > there are a number of KIPs that require message format changes. This
> would
> > mean that the old consumers would not be removed before 0.12.0.0 (the
> major
> > release after 0.11.0.0). Would that work better for you all?
> >
> > Ismael
> >
> > On Tue, Jan 10, 2017 at 12:52 AM, Joel Koshy 
> wrote:
> >
> > > >
> > > >
> > > > The ideal scenario would be for us to provide a tool for no downtime
> > > > migration as discussed in the original thread (I filed
> > > > https://issues.apache.org/jira/browse/KAFKA-4513 in response to that
> > > > discussion). There are a few issues, however:
> > > >
> > > >- There doesn't seem to be much demand for it (outside of
> LinkedIn,
> > at
> > > >least)
> > > >- No-one is working on it or has indicated that they are planning
> to
> > > >work on it
> > > >- It's a non-trivial change and it requires a good amount of
> testing
> > > to
> > > >ensure it works as expected
> > > >
> > >
> > > For LinkedIn: while there are a few consuming applications for which
> the
> > > current shut-down and restart approach to migration will suffice, I
> doubt
> > > we will be able to do this for the majority of services that are outside
> our
> > > direct control. Given that a seamless migration is a pre-req for us to
> > > switch to the new consumer widely (there are a few use-cases already on
> > it)
> > > it is something that we (LinkedIn) will likely implement although we
> > > haven't done further brainstorming/improvements beyond what was
> proposed
> > in
> > > the other deprecation thread.
> > >
> > >
> > > > In the meantime

Build failed in Jenkins: kafka-trunk-jdk8 #1162

2017-01-10 Thread Apache Jenkins Server
See 

Changes:

[me] HOTFIX: Convert exception introduced in KAFKA-3637 to warning since

[me] MINOR: Add Replication Quotas Test Rig

--
[...truncated 18072 lines...]

org.apache.kafka.streams.KafkaStreamsTest > testCannotStartOnceClosed PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCleanup STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCleanup PASSED

org.apache.kafka.streams.KafkaStreamsTest > testStartAndClose STARTED

org.apache.kafka.streams.KafkaStreamsTest > testStartAndClose PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCloseIsIdempotent STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCloseIsIdempotent PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCannotCleanupWhileRunning 
STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCannotCleanupWhileRunning PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCannotStartTwice STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCannotStartTwice PASSED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegion[0] STARTED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegion[0] PASSED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegion[1] STARTED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegion[1] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[0] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[0] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[0] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses[0] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[1] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[1] FAILED
java.lang.AssertionError: Condition not met within timeout 3. waiting 
for store count-by-key
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:259)
at 
org.apache.kafka.streams.integration.QueryableStateIntegrationTest.shouldNotMakeStoreAvailableUntilAllStoresAvailable(QueryableStateIntegrationTest.java:502)

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[1] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses[1] PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testShouldReadFromRegexAndNamedTopics STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testShouldReadFromRegexAndNamedTopics PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenCreated STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenCreated PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenDeleted STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenDeleted PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testNoMessagesSentExceptionFromOverlappingPatterns STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testNoMessagesSentExceptionFromOverlappingPatterns PASSED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperations[0] STARTED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCor

Re: [DISCUSS] KIP-110: Add Codec for ZStandard Compression

2017-01-10 Thread Ismael Juma
Dongjin,

The KIP states:

"I compared the compressed size and compression time of 3 1kb-sized
messages (3102 bytes in total), with the Draft-implementation of ZStandard
Compression Codec and all currently available CompressionCodecs. All
elapsed times are the average of 20 trials."

But doesn't give any details of how this was implemented. Is the source
code available somewhere? Micro-benchmarking in the JVM is pretty tricky so
it needs verification before numbers can be trusted. A performance test
with kafka-producer-perf-test.sh would be nice to have as well, if possible.
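For anyone reproducing such a comparison, the shape of the micro-benchmark (three ~1 KB messages, with sizes and times averaged over repeated trials) can be sketched with Python's stdlib codecs standing in for the JVM-side implementations. The codecs and numbers here are illustrative only and say nothing about zstd-jni itself:

```python
import bz2
import lzma
import time
import zlib

# Roughly mirror the KIP's setup: three 1 KB messages batched together.
MESSAGE = (b"some moderately compressible log line, 2017-01-10 " * 32)[:1024]
PAYLOAD = MESSAGE * 3  # 3072 bytes total

CODECS = {
    "zlib(gzip)": lambda data: zlib.compress(data, 6),
    "bz2": lambda data: bz2.compress(data),
    "lzma": lambda data: lzma.compress(data),
}

def benchmark(trials=20):
    """Return {codec: (compressed_size_bytes, avg_seconds)} over `trials` runs."""
    results = {}
    for name, compress in CODECS.items():
        start = time.perf_counter()
        for _ in range(trials):
            compressed = compress(PAYLOAD)
        avg = (time.perf_counter() - start) / trials
        results[name] = (len(compressed), avg)
    return results

if __name__ == "__main__":
    for name, (size, avg) in benchmark().items():
        print(f"{name}: {size} bytes, {avg * 1e6:.1f} us/trial")
```

As noted above, an end-to-end run with kafka-producer-perf-test.sh would still be needed, since an isolated compression loop ignores batching, I/O, and GC effects.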

Thanks,
Ismael

On Tue, Jan 10, 2017 at 7:44 AM, Dongjin Lee  wrote:

> Ismael,
>
> 1. Is the benchmark in the KIP page not enough? You mean we need a whole
> performance test using kafka-producer-perf-test.sh?
>
> 2. It seems like no major project is relying on it currently. However,
> after reviewing the code, I concluded that at least this project has good
> test coverage. As for tracking upstream: although there has been no
> significant update to ZStandard recently to judge this by, it seems fine.
> If required, I can take responsibility for tracking this library.
>
> Thanks,
> Dongjin
>
> On Tue, Jan 10, 2017 at 7:09 AM, Ismael Juma  wrote:
>
> > Thanks for posting the KIP, ZStandard looks like a nice improvement over
> > the existing compression algorithms. A couple of questions:
> >
> > 1. Can you please elaborate on the details of the benchmark?
> > 2. About https://github.com/luben/zstd-jni, can we rely on it? A few
> > things
> > to consider: are there other projects using it, does it have good test
> > coverage, are there performance tests, does it track upstream closely?
> >
> > Thanks,
> > Ismael
> >
> > On Fri, Jan 6, 2017 at 2:40 AM, Dongjin Lee  wrote:
> >
> > > Hi all,
> > >
> > > I've just posted a new KIP "KIP-110: Add Codec for ZStandard
> Compression"
> > > for
> > > discussion:
> > >
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > 110%3A+Add+Codec+for+ZStandard+Compression
> > >
> > > Please have a look when you are free.
> > >
> > > Best,
> > > Dongjin
> > >
> > > --
> > > *Dongjin Lee*
> > >
> > >
> > > *Software developer in Line+. So interested in massive-scale machine
> > > learning.*
> > > facebook: www.facebook.com/dongjin.lee.kr
> > > linkedin: kr.linkedin.com/in/dongjinleekr
> > > github: github.com/dongjinleekr
> > > twitter: www.twitter.com/dongjinleekr
> > >
> >
>
>
>
> --
> *Dongjin Lee*
>
>
> *Software developer in Line+. So interested in massive-scale machine
> learning.*
> facebook: www.facebook.com/dongjin.lee.kr
> linkedin: kr.linkedin.com/in/dongjinleekr
> github: github.com/dongjinleekr
> twitter: www.twitter.com/dongjinleekr
>


Re: [VOTE] KIP-105: Addition of Record Level for Sensors

2017-01-10 Thread Eno Thereska
Thanks for the votes, this is now adopted:

Binding +1s: Joel, Neha, Sriram, Guozhang, Ismael
Non-binding +1: Michael, Matthias, Bill, Damian.

Eno

> On 9 Jan 2017, at 22:04, Joel Koshy  wrote:
> 
> +1
> (although I added a minor comment on the discussion thread)
> 
> On Mon, Jan 9, 2017 at 3:43 AM, Michael Noll  wrote:
> 
>> +1 (non-binding)
>> 
>> On Fri, Jan 6, 2017 at 6:12 PM, Matthias J. Sax 
>> wrote:
>> 
>>> +1
>>> 
>>> On 1/6/17 9:09 AM, Neha Narkhede wrote:
 +1
 
 On Fri, Jan 6, 2017 at 9:04 AM Sriram Subramanian 
>>> wrote:
 
> +1
> 
> On Fri, Jan 6, 2017 at 8:40 AM, Bill Bejeck 
>> wrote:
> 
>> +1
>> 
>> On Fri, Jan 6, 2017 at 11:06 AM, Guozhang Wang 
> wrote:
>> 
>>> +1
>>> 
>>> On Fri, Jan 6, 2017 at 2:55 AM, Damian Guy 
> wrote:
>>> 
 +1
 
 On Fri, 6 Jan 2017 at 10:48 Ismael Juma  wrote:
 
> Thanks for the KIP, +1 (binding).
> 
> Ismael
> 
> On Fri, Jan 6, 2017 at 10:37 AM, Eno Thereska <
>> eno.there...@gmail.com>
> wrote:
> 
>> The discussion points for KIP-105 are addressed. At this point
> we'd
 like
>> to start the vote for it:
>> 
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
>> 105%3A+Addition+of+Record+Level+for+Sensors <
>>> https://cwiki.apache.org/
>> confluence/display/KAFKA/KIP-105:+Addition+of+Record+Level+
 for+Sensors>
>> 
>> Thanks
>> Eno and Aarti
> 
 
>>> 
>>> 
>>> 
>>> --
>>> -- Guozhang
>>> 
>> 
> 
>>> 
>>> 
>> 



[jira] [Commented] (KAFKA-4611) Support custom authentication mechanism

2017-01-10 Thread Edoardo Comar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15814508#comment-15814508
 ] 

Edoardo Comar commented on KAFKA-4611:
--

Hi, I am not sure that what you are asking isn't covered already:

https://issues.apache.org/jira/browse/KAFKA-4292 proposes custom callback 
handlers (so you control how the username and password are checked).

https://issues.apache.org/jira/browse/KAFKA-4259 added the ability to 
define JAAS configurations in the client config file.

https://issues.apache.org/jira/browse/KAFKA-4180 will add the ability to 
use different JAAS configs in a single client process.

Please note that Kerberos login uses JAAS too.
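For reference, the client-side inline JAAS configuration that KAFKA-4259 introduced ends up looking roughly like the fragment below in the client properties file; the mechanism, username, and password values are placeholders, and the exact property names assume a client version that includes that change:

```properties
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
# JAAS configuration inline in the client properties (KAFKA-4259),
# instead of the java.security.auth.login.config system property:
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="alice" \
  password="alice-secret";
```

Because the config travels with the client properties, it sidesteps the problem of setting a JVM-wide system property on every executor.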



> Support custom authentication mechanism
> ---
>
> Key: KAFKA-4611
> URL: https://issues.apache.org/jira/browse/KAFKA-4611
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.10.0.0
>Reporter: mahendiran chandrasekar
>
> Currently there are two login mechanisms supported by kafka client.
> 1) Default Login / Abstract Login which uses JAAS authentication
> 2) Kerberos Login
> Supporting user-defined login mechanisms would be nice. 
> This could be achieved by removing the limitation from 
> [here](https://github.com/apache/kafka/blob/0.10.0/clients/src/main/java/org/apache/kafka/common/security/authenticator/LoginManager.java#L44)
>  ... Instead, the custom login module implemented by the user could be read 
> from the configs, giving users the option to implement a custom login mechanism. 
> I am running into an issue setting the JAAS authentication system property on 
> all executors of my Spark cluster. Having a custom mechanism to authenticate to 
> Kafka would be a good improvement for me.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4610) getting error:Batch containing 3 record(s) expired due to timeout while requesting metadata from brokers for test2R2P2-1

2017-01-10 Thread sandeep kumar singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15814494#comment-15814494
 ] 

sandeep kumar singh commented on KAFKA-4610:


Thanks for the reply. Yes, most of the messages get stored in Kafka. Say if I 
send 10 messages then I see almost 99000 messages get stored. 

Update: I only see this error when I restart the leader broker for topic 
test2R2P2-0. This topic has replication factor 2 and 2 partitions. I have a 
2-broker cluster in my setup. Could you please suggest how we can avoid such 
failures?

On a separate note: I see data loss when running kafka-producer-perf-test.sh 
and killing one of the brokers (in a 2-node cluster) while the test is running. 
Is this expected behavior? I see the same results across multiple tests.
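One likely cause of the observed loss: with `--request-required-acks 1`, the leader acknowledges a batch before followers have copied it, so killing the leader can drop acknowledged messages. A configuration along these lines (values illustrative, names per the standard producer/topic configs) trades latency for durability:

```properties
# Producer side: require acknowledgement from all in-sync replicas
acks=all
retries=10
# Topic/broker side: refuse writes unless at least 2 replicas are in sync
min.insync.replicas=2
# Broker side: never elect an out-of-sync replica as leader
unclean.leader.election.enable=false
```

With acks=1 some loss on leader failover is expected behavior, not a bug; the settings above are how it is usually avoided.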



> getting error:Batch containing 3 record(s) expired due to timeout while 
> requesting metadata from brokers for test2R2P2-1
> 
>
> Key: KAFKA-4610
> URL: https://issues.apache.org/jira/browse/KAFKA-4610
> Project: Kafka
>  Issue Type: Bug
> Environment: Dev
>Reporter: sandeep kumar singh
>
> I am getting the below error when running the producer client, which takes 
> messages from an input file, kafka_message.log. This log file is filled with 10 
> records per second, each message of length 4096.
> error - 
> [2017-01-09 14:45:24,813] ERROR Error when sending message to topic test2R2P2 
> with key: null, value: 4096 bytes with error: 
> (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
> org.apache.kafka.common.errors.TimeoutException: Batch containing 3 record(s) 
> expired due to timeout while requesting metadata from brokers for test2R2P2-0
> [2017-01-09 14:45:24,816] ERROR Error when sending message to topic test2R2P2 
> with key: null, value: 4096 bytes with error: 
> (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
> org.apache.kafka.common.errors.TimeoutException: Batch containing 3 record(s) 
> expired due to timeout while requesting metadata from brokers for test2R2P2-0
> [2017-01-09 14:45:24,816] ERROR Error when sending message to topic test2R2P2 
> with key: null, value: 4096 bytes with error: 
> (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
> org.apache.kafka.common.errors.TimeoutException: Batch containing 3 record(s) 
> expired due to timeout while requesting metadata from brokers for test2R2P2-0
> [2017-01-09 14:45:24,816] ERROR Error when sending message to topic test2R2P2 
> with key: null, value: 4096 bytes with error: 
> (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
> org.apache.kafka.common.errors.TimeoutException: Batch containing 3 record(s) 
> expired due to timeout while requesting metadata from brokers for test2R2P2-0
> [2017-01-09 14:45:24,816] ERROR Error when sending message to topic test2R2P2 
> with key: null, value: 4096 bytes with error: 
> (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
> org.apache.kafka.common.errors.TimeoutException: Batch containing 3 record(s) 
> expired due to timeout while requesting metadata from brokers for test2R2P2-0
> command i run :
> $ bin/kafka-console-producer.sh --broker-list x.x.x.x:,x.x.x.x: 
> --batch-size 1000 --message-send-max-retries 10 --request-required-acks 1 
> --topic test2R2P2 <~/kafka_message.log
> there are 2 brokers running and the topic has partitions = 2 and replication 
> factor 2. 
> Could you please help me understand what that error means?
> Also, I see message loss when I manually restart one of the brokers while the 
> kafka-producer-perf-test command is running. Is this expected behavior?
> thanks
> Sandeep





[jira] [Comment Edited] (KAFKA-4569) Transient failure in org.apache.kafka.clients.consumer.KafkaConsumerTest.testWakeupWithFetchDataAvailable

2017-01-10 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15813892#comment-15813892
 ] 

Umesh Chaudhary edited comment on KAFKA-4569 at 1/10/17 9:18 AM:
-

Hi [~ijuma], one observation regarding this case: when I run the test 
testWakeupWithFetchDataAvailable normally it never fails, but when I debug it 
with a breakpoint, it always fails. Trying to find the reason for the debug 
failures. 


was (Author: umesh9...@gmail.com):
Hi [~ijuma], One observation regarding the case. I saw that when I run the test 
testWakeupWithFetchDataAvailable it never fails but when I debug it, it always 
fails. Trying to get the reasons for debug failures. 

> Transient failure in 
> org.apache.kafka.clients.consumer.KafkaConsumerTest.testWakeupWithFetchDataAvailable
> -
>
> Key: KAFKA-4569
> URL: https://issues.apache.org/jira/browse/KAFKA-4569
> Project: Kafka
>  Issue Type: Sub-task
>  Components: unit tests
>Reporter: Guozhang Wang
>Assignee: Umesh Chaudhary
>  Labels: newbie
> Fix For: 0.10.2.0
>
>
> One example is:
> https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/370/testReport/junit/org.apache.kafka.clients.consumer/KafkaConsumerTest/testWakeupWithFetchDataAvailable/
> {code}
> Stacktrace
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.fail(Assert.java:95)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumerTest.testWakeupWithFetchDataAvailable(KafkaConsumerTest.java:679)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
>   at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
> 

Build failed in Jenkins: kafka-trunk-jdk8 #1161

2017-01-10 Thread Apache Jenkins Server
See 

Changes:

[me] TRIVIAL: Fix spelling of log message

[me] MINOR: Fix small error in javadoc for persistent Stores

--
[...truncated 18065 lines...]

org.apache.kafka.streams.KafkaStreamsTest > testCannotStartOnceClosed PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCleanup STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCleanup PASSED

org.apache.kafka.streams.KafkaStreamsTest > testStartAndClose STARTED

org.apache.kafka.streams.KafkaStreamsTest > testStartAndClose PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCloseIsIdempotent STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCloseIsIdempotent PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCannotCleanupWhileRunning 
STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCannotCleanupWhileRunning PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCannotStartTwice STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCannotStartTwice PASSED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegion[0] STARTED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegion[0] PASSED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegion[1] STARTED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegion[1] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[0] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[0] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[0] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses[0] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[1] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[1] FAILED
java.lang.AssertionError: Condition not met within timeout 3. waiting 
for store count-by-key
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:259)
at 
org.apache.kafka.streams.integration.QueryableStateIntegrationTest.shouldNotMakeStoreAvailableUntilAllStoresAvailable(QueryableStateIntegrationTest.java:502)

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[1] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses[1] PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testShouldReadFromRegexAndNamedTopics STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testShouldReadFromRegexAndNamedTopics PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenCreated STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenCreated PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenDeleted STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenDeleted PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testNoMessagesSentExceptionFromOverlappingPatterns STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testNoMessagesSentExceptionFromOverlappingPatterns PASSED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperations[0] STARTED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartit

[jira] [Commented] (KAFKA-4564) When the destination brokers are down or misconfigured in config, Streams should fail fast

2017-01-10 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15814359#comment-15814359
 ] 

Umesh Chaudhary commented on KAFKA-4564:


Hi [~guozhang], [~ewencp], I am thinking of creating a KafkaProducer (using the 
properties wrapped in the StreamsConfig) inside the constructor of the StreamTask 
class to verify whether the user has configured the bootstrap list correctly or 
not. For a misconfigured bootstrap list, we can throw the appropriate exception 
right there without proceeding further. 

Please suggest whether that was the expectation from this JIRA and correct me if 
I am wrong here.
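The fail-fast idea can be sketched, independently of any Kafka API, as a plain reachability probe. This Python sketch only illustrates the intent; the real fix would surface errors from the client's metadata fetch rather than a raw TCP check:

```python
import socket

def check_bootstrap(servers, timeout=5.0):
    """Hypothetical fail-fast check: raise immediately if no server in the
    comma-separated bootstrap list accepts a TCP connection, instead of
    letting the application hang silently on an unreachable cluster."""
    for hostport in servers.split(","):
        host, _, port = hostport.strip().rpartition(":")
        try:
            with socket.create_connection((host, int(port)), timeout=timeout):
                return hostport.strip()  # at least one broker is reachable
        except OSError:
            continue  # try the next bootstrap server
    raise RuntimeError("no bootstrap server reachable in %r" % servers)
```

Probing in the constructor catches the misconfiguration before any stream threads start, which is the behavior this JIRA asks for.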

> When the destination brokers are down or misconfigured in config, Streams 
> should fail fast
> --
>
> Key: KAFKA-4564
> URL: https://issues.apache.org/jira/browse/KAFKA-4564
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Umesh Chaudhary
>  Labels: newbie
>
> Today if Kafka is down or users misconfigure the bootstrap list, Streams may 
> just hang for a while without any error messages even with log4j 
> enabled, which is quite confusing.





[GitHub] kafka pull request #1957: MINOR: Add Replication Quotas Test Rig

2017-01-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1957


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-4548) Add CompatibilityTest to verify that individual features are supported or not by the broker we're connecting to

2017-01-10 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-4548:
-
Fix Version/s: 0.10.2.0

> Add CompatibilityTest to verify that individual features are supported or not 
> by the broker we're connecting to
> ---
>
> Key: KAFKA-4548
> URL: https://issues.apache.org/jira/browse/KAFKA-4548
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, system tests, unit tests
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
> Fix For: 0.10.2.0
>
>
> Add CompatibilityTest to verify that individual features are supported or not 
> by the broker we're connecting to.  This can be used in a ducktape test to 
> verify that the feature is present or absent.





[jira] [Updated] (KAFKA-4507) The client should send older versions of requests to the broker if necessary

2017-01-10 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-4507:
-
Priority: Blocker  (was: Major)

> The client should send older versions of requests to the broker if necessary
> 
>
> Key: KAFKA-4507
> URL: https://issues.apache.org/jira/browse/KAFKA-4507
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
>Priority: Blocker
> Fix For: 0.10.2.0
>
>
> The client should send older versions of requests to the broker if necessary.





[jira] [Updated] (KAFKA-4457) Add a command to list the broker version information

2017-01-10 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-4457:
-
Priority: Blocker  (was: Major)

> Add a command to list the broker version information
> 
>
> Key: KAFKA-4457
> URL: https://issues.apache.org/jira/browse/KAFKA-4457
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients
>Affects Versions: 0.10.1.1
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
>Priority: Blocker
> Fix For: 0.10.2.0
>
>
> Add a command to list the broker version information.





[jira] [Updated] (KAFKA-4507) The client should send older versions of requests to the broker if necessary

2017-01-10 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-4507:
-
Fix Version/s: 0.10.2.0

> The client should send older versions of requests to the broker if necessary
> 
>
> Key: KAFKA-4507
> URL: https://issues.apache.org/jira/browse/KAFKA-4507
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
> Fix For: 0.10.2.0
>
>
> The client should send older versions of requests to the broker if necessary.





[jira] [Updated] (KAFKA-4457) Add a command to list the broker version information

2017-01-10 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-4457:
-
Fix Version/s: 0.10.2.0

> Add a command to list the broker version information
> 
>
> Key: KAFKA-4457
> URL: https://issues.apache.org/jira/browse/KAFKA-4457
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients
>Affects Versions: 0.10.1.1
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
> Fix For: 0.10.2.0
>
>
> Add a command to list the broker version information.





[GitHub] kafka pull request #2283: HOTFIX: Convert exception to warning since clearly...

2017-01-10 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2283

