[jira] [Comment Edited] (KAFKA-1944) Rename LogCleaner and related classes to LogCompactor

2017-07-31 Thread Pranav Maniar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16108462#comment-16108462
 ] 

Pranav Maniar edited comment on KAFKA-1944 at 8/1/17 6:39 AM:
--

It seems there is both agreement and disagreement about this change...
So should I create a KIP, or should we first discuss it over JIRA/the mailing list?

Also, what about the other cleaner configs besides {{log.cleaner.enable}}? Do 
any of the other cleaner configs also require renaming? E.g.
{code}
log.cleaner.backoff.ms
log.cleaner.delete.retention.ms
log.cleaner.min.compaction.lag.ms
log.cleaner.threads
...
{code}


was (Author: pranav.maniar):
It seems there is both agreement and disagreement about this change...
So should I create a KIP, or should we first discuss it over JIRA/the mailing list?

Also, what about the other cleaner configs besides {{log.cleaner.enable}}? Do 
any of the other cleaner configs also require renaming? E.g.
{code}
log.cleaner.backoff.ms
log.cleaner.delete.retention.ms
log.cleaner.min.compaction.lag.ms
log.cleaner.threads
{code}

> Rename LogCleaner and related classes to LogCompactor
> -
>
> Key: KAFKA-1944
> URL: https://issues.apache.org/jira/browse/KAFKA-1944
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Pranav Maniar
>  Labels: newbie
>
> Following a mailing list discussion:
> "the name LogCleaner is seriously misleading. Its more of a log compactor. 
> Deleting old logs happens elsewhere from what I've seen."
> Note that this may require renaming related classes, objects, configs and 
> metrics.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (KAFKA-1944) Rename LogCleaner and related classes to LogCompactor

2017-07-31 Thread Pranav Maniar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16108462#comment-16108462
 ] 

Pranav Maniar edited comment on KAFKA-1944 at 8/1/17 6:39 AM:
--

It seems there is both agreement and disagreement about this change...
So should I create a KIP, or should we first discuss it over JIRA/the mailing list?

Also, what about the other cleaner configs besides {{log.cleaner.enable}}? Do 
any of the other cleaner configs also require renaming? E.g.
{code}
log.cleaner.backoff.ms
log.cleaner.delete.retention.ms
log.cleaner.min.compaction.lag.ms
log.cleaner.threads
{code}


was (Author: pranav.maniar):
It seems there is both agreement and disagreement about this change...
So should I create a KIP, or should we first discuss it over JIRA/the mailing list?

Also, what about the other cleaner configs besides {{log.cleaner.enable}}? Do 
any of the other cleaner configs also require renaming? E.g.


> Rename LogCleaner and related classes to LogCompactor
> -
>
> Key: KAFKA-1944
> URL: https://issues.apache.org/jira/browse/KAFKA-1944
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Pranav Maniar
>  Labels: newbie
>
> Following a mailing list discussion:
> "the name LogCleaner is seriously misleading. Its more of a log compactor. 
> Deleting old logs happens elsewhere from what I've seen."
> Note that this may require renaming related classes, objects, configs and 
> metrics.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-1944) Rename LogCleaner and related classes to LogCompactor

2017-07-31 Thread Pranav Maniar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16108462#comment-16108462
 ] 

Pranav Maniar commented on KAFKA-1944:
--

It seems there is both agreement and disagreement about this change...
So should I create a KIP, or should we first discuss it over JIRA/the mailing list?

Also, what about the other cleaner configs besides {{log.cleaner.enable}}? Do 
any of the other cleaner configs also require renaming? E.g.


> Rename LogCleaner and related classes to LogCompactor
> -
>
> Key: KAFKA-1944
> URL: https://issues.apache.org/jira/browse/KAFKA-1944
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Pranav Maniar
>  Labels: newbie
>
> Following a mailing list discussion:
> "the name LogCleaner is seriously misleading. Its more of a log compactor. 
> Deleting old logs happens elsewhere from what I've seen."
> Note that this may require renaming related classes, objects, configs and 
> metrics.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5683) Dedicated thread pool for Kafka Streams tasks

2017-07-31 Thread Abhishek Gupta (JIRA)
Abhishek Gupta created KAFKA-5683:
-

 Summary: Dedicated thread pool for Kafka Streams tasks
 Key: KAFKA-5683
 URL: https://issues.apache.org/jira/browse/KAFKA-5683
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Abhishek Gupta
Priority: Minor


Allow users to assign a specific thread pool (ExecutorService) to Kafka 
Streams tasks, e.g.
ExecutorService pool = ...;
new KafkaStreams(topology, config).start(pool);

This would be particularly helpful in environments where the container/runtime 
provides a facility to spawn 'managed' threads.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-2526) Console Producer / Consumer's serde config is not working

2017-07-31 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16108328#comment-16108328
 ] 

Ewen Cheslack-Postava commented on KAFKA-2526:
--

What is the proposed fix? I don't know that I'd spend too much time on a patch 
unless there's some agreement about the fix (aside from prototyping purposes). 
The simplest fix is to just throw an error if users try to set the serializers 
to anything else, to warn them that the setting won't work.
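
A minimal sketch of that simplest fix, assuming a hypothetical validation helper 
inside the console producer's option parsing (the config constants are real; the 
surrounding method is illustrative only):
{code}
import java.util.Properties;

import org.apache.kafka.clients.producer.ProducerConfig;

public class SerializerOverrideCheck {

    // Hypothetical helper: reject serializer overrides passed via
    // --producer-property, since the console producer serializes the input
    // line itself and would silently ignore them.
    static void validateNoSerializerOverride(Properties producerProps) {
        String[] fixedConfigs = {
            ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
            ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG
        };
        for (String key : fixedConfigs) {
            if (producerProps.containsKey(key)) {
                throw new IllegalArgumentException(
                    key + " cannot be overridden for the console producer; the value would be ignored.");
            }
        }
    }
}
{code}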

> Console Producer / Consumer's serde config is not working
> -
>
> Key: KAFKA-2526
> URL: https://issues.apache.org/jira/browse/KAFKA-2526
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Mayuresh Gharat
>  Labels: newbie
>
> Although in the console producer one can specify the key and value serializers, 
> they are actually not used since 1) it always serializes the input string as 
> String.getBytes (hence always pre-assuming the string serializer) and 2) it is 
> actually only passed into the old producer. The same issues exist in the 
> console consumer.
> In addition, the configs in the console producer are messy: we have 1) some 
> config values exposed as cmd parameters, 2) some config values in 
> --producer-property and 3) some in --property.
> It would be great to clean the configs up in both the console producer and 
> consumer, and put them into a single --property parameter which could 
> possibly take a file to read in property values as well, and only leave 
> --new-producer as the other command line parameter.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5671) Add StreamsBuilder and deprecate KStreamBuilder

2017-07-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16108211#comment-16108211
 ] 

ASF GitHub Bot commented on KAFKA-5671:
---

GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/3603

KAFKA-5671 Followup: Remove reflections in unit test classes

1. Remove rest deprecation warnings in streams:jar.

2. Consolidate all unit test classes' reflections to access internal 
topology builder from packages other than `o.a.k.streams`. We need to refactor 
the hierarchies of StreamTask, StreamThread and KafkaStreams to remove these 
hacky reflections.

3. Minor fixes such as reference path, etc

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka K5671-followup-comments

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3603.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3603


commit 86e1c22a223c12180ff192befe76c91f9618435c
Author: Guozhang Wang 
Date:   2017-08-01T00:31:58Z

Remove reflections in unit test classes




> Add StreamsBuilder and deprecate KStreamBuilder
> ---
>
> Key: KAFKA-5671
> URL: https://issues.apache.org/jira/browse/KAFKA-5671
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
> Fix For: 1.0.0
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5621) The producer should retry expired batches when retries are enabled

2017-07-31 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16108156#comment-16108156
 ] 

Jiangjie Qin commented on KAFKA-5621:
-

[~ijuma] Like [~apurva] said, for users who care about ordering, when an 
exception is thrown from the sender thread the user cannot do much other than 
close the producer immediately in the sender thread and do some error-handling 
logic such as failover or alerting. Some applications may recreate a producer 
and retry the unsent messages, but that may or may not fly. In any case, I 
think waiting/blocking indefinitely for a message to be sent is bad.

> The producer should retry expired batches when retries are enabled
> --
>
> Key: KAFKA-5621
> URL: https://issues.apache.org/jira/browse/KAFKA-5621
> Project: Kafka
>  Issue Type: Bug
>Reporter: Apurva Mehta
>Assignee: Apurva Mehta
> Fix For: 1.0.0
>
>
> Today, when a batch is expired in the accumulator, a {{TimeoutException}} is 
> raised to the user.
> It might be better for the producer to retry the expired batch up to the 
> configured number of retries. This is more intuitive from the user's point of 
> view. 
> Further the proposed behavior makes it easier for applications like mirror 
> maker to provide ordering guarantees even when batches expire. Today, they 
> would resend the expired batch and it would get added to the back of the 
> queue, causing the output ordering to be different from the input ordering.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5621) The producer should retry expired batches when retries are enabled

2017-07-31 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16108155#comment-16108155
 ] 

Ismael Juma commented on KAFKA-5621:


[~apurva], if the producer is blocked for more than max.block.ms because the 
buffer is full, then it will throw an exception. So that scenario is already 
controlled by a separate config.

> The producer should retry expired batches when retries are enabled
> --
>
> Key: KAFKA-5621
> URL: https://issues.apache.org/jira/browse/KAFKA-5621
> Project: Kafka
>  Issue Type: Bug
>Reporter: Apurva Mehta
>Assignee: Apurva Mehta
> Fix For: 1.0.0
>
>
> Today, when a batch is expired in the accumulator, a {{TimeoutException}} is 
> raised to the user.
> It might be better for the producer to retry the expired batch up to the 
> configured number of retries. This is more intuitive from the user's point of 
> view. 
> Further the proposed behavior makes it easier for applications like mirror 
> maker to provide ordering guarantees even when batches expire. Today, they 
> would resend the expired batch and it would get added to the back of the 
> queue, causing the output ordering to be different from the input ordering.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5621) The producer should retry expired batches when retries are enabled

2017-07-31 Thread Apurva Mehta (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16108141#comment-16108141
 ] 

Apurva Mehta commented on KAFKA-5621:
-

[~ijuma], I can't speak for [~becket_qin], but it seems that a basic 
application could just raise an exception and exit if it can't produce for a 
certain amount of time. That seems like better semantics than just being 
blocked on a send because the producer buffers are full. That's why bounding 
the amount of time to wait before erroring out a request makes sense in 
general, IMO.

Of course, there may be applications which could do something more meaningful 
when batches are expired. And not all applications care about ordering either.
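
For what it's worth, a minimal sketch of that fail-fast behaviour with the 
current API, assuming the application's only sensible reaction to a delivery 
timeout is to close the producer and stop (topic name and serializers are 
placeholders):
{code}
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.errors.TimeoutException;

public class FailFastProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.ACKS_CONFIG, "all");

        final KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.send(new ProducerRecord<>("my-topic", "key", "value"),
            (metadata, exception) -> {
                if (exception instanceof TimeoutException) {
                    // The batch expired before it could be delivered: give up,
                    // close the producer and let the application exit.
                    producer.close();
                    System.exit(1);
                }
            });
        producer.flush();
    }
}
{code}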

> The producer should retry expired batches when retries are enabled
> --
>
> Key: KAFKA-5621
> URL: https://issues.apache.org/jira/browse/KAFKA-5621
> Project: Kafka
>  Issue Type: Bug
>Reporter: Apurva Mehta
>Assignee: Apurva Mehta
> Fix For: 1.0.0
>
>
> Today, when a batch is expired in the accumulator, a {{TimeoutException}} is 
> raised to the user.
> It might be better for the producer to retry the expired batch up to the 
> configured number of retries. This is more intuitive from the user's point of 
> view. 
> Further the proposed behavior makes it easier for applications like mirror 
> maker to provide ordering guarantees even when batches expire. Today, they 
> would resend the expired batch and it would get added to the back of the 
> queue, causing the output ordering to be different from the input ordering.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5621) The producer should retry expired batches when retries are enabled

2017-07-31 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16108130#comment-16108130
 ] 

Ismael Juma commented on KAFKA-5621:


[~becket_qin], you said a couple of times that if delivery can't be achieved 
within a certain amount of time, then users may want to get an exception and do 
"something". Can you please elaborate on what this is for an application that 
cares about ordering?

> The producer should retry expired batches when retries are enabled
> --
>
> Key: KAFKA-5621
> URL: https://issues.apache.org/jira/browse/KAFKA-5621
> Project: Kafka
>  Issue Type: Bug
>Reporter: Apurva Mehta
>Assignee: Apurva Mehta
> Fix For: 1.0.0
>
>
> Today, when a batch is expired in the accumulator, a {{TimeoutException}} is 
> raised to the user.
> It might be better for the producer to retry the expired batch up to the 
> configured number of retries. This is more intuitive from the user's point of 
> view. 
> Further the proposed behavior makes it easier for applications like mirror 
> maker to provide ordering guarantees even when batches expire. Today, they 
> would resend the expired batch and it would get added to the back of the 
> queue, causing the output ordering to be different from the input ordering.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5621) The producer should retry expired batches when retries are enabled

2017-07-31 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16108095#comment-16108095
 ] 

Jason Gustafson commented on KAFKA-5621:


Interesting discussion. Apurva's suggestion on a configuration to bound max 
delivery time sounds promising. Seems much easier for users to think about than 
some combination of retries, request timeout, batch timeout, and linger.

> The producer should retry expired batches when retries are enabled
> --
>
> Key: KAFKA-5621
> URL: https://issues.apache.org/jira/browse/KAFKA-5621
> Project: Kafka
>  Issue Type: Bug
>Reporter: Apurva Mehta
>Assignee: Apurva Mehta
> Fix For: 1.0.0
>
>
> Today, when a batch is expired in the accumulator, a {{TimeoutException}} is 
> raised to the user.
> It might be better for the producer to retry the expired batch up to the 
> configured number of retries. This is more intuitive from the user's point of 
> view. 
> Further the proposed behavior makes it easier for applications like mirror 
> maker to provide ordering guarantees even when batches expire. Today, they 
> would resend the expired batch and it would get added to the back of the 
> queue, causing the output ordering to be different from the input ordering.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5671) Add StreamsBuilder and deprecate KStreamBuilder

2017-07-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16108091#comment-16108091
 ] 

ASF GitHub Bot commented on KAFKA-5671:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3602


> Add StreamsBuilder and deprecate KStreamBuilder
> ---
>
> Key: KAFKA-5671
> URL: https://issues.apache.org/jira/browse/KAFKA-5671
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
> Fix For: 1.0.0
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5671) Add StreamsBuilder and deprecate KStreamBuilder

2017-07-31 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-5671.
--
Resolution: Fixed

Issue resolved by pull request 3602
[https://github.com/apache/kafka/pull/3602]

> Add StreamsBuilder and deprecate KStreamBuilder
> ---
>
> Key: KAFKA-5671
> URL: https://issues.apache.org/jira/browse/KAFKA-5671
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
> Fix For: 1.0.0
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5621) The producer should retry expired batches when retries are enabled

2017-07-31 Thread Apurva Mehta (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16108067#comment-16108067
 ] 

Apurva Mehta commented on KAFKA-5621:
-

[~becket_qin] shall we then post kip-91 to the community list and begin the 
discussion?

> The producer should retry expired batches when retries are enabled
> --
>
> Key: KAFKA-5621
> URL: https://issues.apache.org/jira/browse/KAFKA-5621
> Project: Kafka
>  Issue Type: Bug
>Reporter: Apurva Mehta
>Assignee: Apurva Mehta
> Fix For: 1.0.0
>
>
> Today, when a batch is expired in the accumulator, a {{TimeoutException}} is 
> raised to the user.
> It might be better for the producer to retry the expired batch up to the 
> configured number of retries. This is more intuitive from the user's point of 
> view. 
> Further the proposed behavior makes it easier for applications like mirror 
> maker to provide ordering guarantees even when batches expire. Today, they 
> would resend the expired batch and it would get added to the back of the 
> queue, causing the output ordering to be different from the input ordering.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (KAFKA-5621) The producer should retry expired batches when retries are enabled

2017-07-31 Thread Apurva Mehta (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16108067#comment-16108067
 ] 

Apurva Mehta edited comment on KAFKA-5621 at 7/31/17 10:15 PM:
---

[~becket_qin] shall we then post kip-91 to the community list and begin the 
discussion?

[~ijuma] does that sound reasonable?


was (Author: apurva):
[~becket_qin] shall we then post kip-91 to the community list and begin the 
discussion?

> The producer should retry expired batches when retries are enabled
> --
>
> Key: KAFKA-5621
> URL: https://issues.apache.org/jira/browse/KAFKA-5621
> Project: Kafka
>  Issue Type: Bug
>Reporter: Apurva Mehta
>Assignee: Apurva Mehta
> Fix For: 1.0.0
>
>
> Today, when a batch is expired in the accumulator, a {{TimeoutException}} is 
> raised to the user.
> It might be better for the producer to retry the expired batch up to the 
> configured number of retries. This is more intuitive from the user's point of 
> view. 
> Further the proposed behavior makes it easier for applications like mirror 
> maker to provide ordering guarantees even when batches expire. Today, they 
> would resend the expired batch and it would get added to the back of the 
> queue, causing the output ordering to be different from the input ordering.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5621) The producer should retry expired batches when retries are enabled

2017-07-31 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16108050#comment-16108050
 ] 

Jiangjie Qin commented on KAFKA-5621:
-

[~apurva] Thanks for the explanation. I think we are on the same page that all 
users (whether applications or MM) would want the producer to
1. handle temporary errors as much as possible, and 
2. raise an error to the application after some reasonable amount of time 
instead of silently retrying forever.

The concern I have with the current proposal of this ticket is that when users 
set retries to a big number, we will fail on (2). As you can imagine, when the 
user calls flush() it will basically block forever.

The proposed new configurations sound reasonable to me; we can discuss them in 
KIP-91. It would be helpful to see how users would configure the producer in 
different scenarios and see if that makes sense.

> The producer should retry expired batches when retries are enabled
> --
>
> Key: KAFKA-5621
> URL: https://issues.apache.org/jira/browse/KAFKA-5621
> Project: Kafka
>  Issue Type: Bug
>Reporter: Apurva Mehta
>Assignee: Apurva Mehta
> Fix For: 1.0.0
>
>
> Today, when a batch is expired in the accumulator, a {{TimeoutException}} is 
> raised to the user.
> It might be better for the producer to retry the expired batch up to the 
> configured number of retries. This is more intuitive from the user's point of 
> view. 
> Further the proposed behavior makes it easier for applications like mirror 
> maker to provide ordering guarantees even when batches expire. Today, they 
> would resend the expired batch and it would get added to the back of the 
> queue, causing the output ordering to be different from the input ordering.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5638) Inconsistency in consumer group related ACLs

2017-07-31 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16108007#comment-16108007
 ] 

Jason Gustafson commented on KAFKA-5638:


[~vahid] Interesting point. Maybe we could just extend the current behavior? If 
the user has {{Describe(Cluster)}}, we can list all groups as we do currently. 
Otherwise, we can restrict the set of groups to only those that the user has 
{{Describe(Group)}} permission for?
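
Just to illustrate, a rough sketch of that filtering rule; the {{BiPredicate}} 
stands in for the broker's real authorizer, which is not modelled here:
{code}
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiPredicate;

public class GroupListingFilter {

    // describeAllowed.test(resourceType, resourceName) stands in for an
    // Authorizer check of the Describe operation on that resource.
    public static List<String> visibleGroups(List<String> allGroups,
                                             BiPredicate<String, String> describeAllowed) {
        if (describeAllowed.test("Cluster", "kafka-cluster")) {
            return allGroups; // current behavior: Describe(Cluster) lists everything
        }
        List<String> visible = new ArrayList<>();
        for (String group : allGroups) {
            if (describeAllowed.test("Group", group)) { // proposed extension
                visible.add(group);
            }
        }
        return visible;
    }
}
{code}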

> Inconsistency in consumer group related ACLs
> 
>
> Key: KAFKA-5638
> URL: https://issues.apache.org/jira/browse/KAFKA-5638
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.11.0.0
>Reporter: Vahid Hashemian
>Assignee: Vahid Hashemian
>Priority: Minor
>  Labels: needs-kip
>
> Users can see all groups in the cluster (using consumer group’s {{--list}} 
> option) provided that they have {{Describe}} access to the cluster. It would 
> make more sense to modify that experience and limit what is listed in the 
> output to only those groups they have {{Describe}} access to. The reason is, 
> almost everything else is accessible by a user only if the access is 
> specifically granted (through ACL {{--add}}); and this scenario should not be 
> an exception. The potential change would be updating the minimum required 
> permission of {{ListGroup}} from {{Describe (Cluster)}} to {{Describe 
> (Group)}}.
> We can also look at this issue from a different angle: A user with {{Read}} 
> access to a group can describe the group, but the same user would not see 
> anything when listing groups (assuming there is no {{Describe}} access to the 
> cluster). It makes more sense for this user to be able to list all groups 
> s/he can already describe.
> It would be great to know if any user is relying on the existing behavior 
> (listing all consumer groups using a {{Describe (Cluster)}} ACL).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-1944) Rename LogCleaner and related classes to LogCompactor

2017-07-31 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16107997#comment-16107997
 ] 

Jason Gustafson commented on KAFKA-1944:


I personally don't think it's worth introducing alternative configs just so we 
can rename classes internally. The naming is unfortunate, but many users have 
gotten used to it and having multiple config names seems rather annoying 
(deprecation is a slow process). It sounds like we are in agreement on 
deprecating {{log.cleaner.enable}}? Maybe we should do that in a separate 
JIRA/KIP and consider closing this as "won't fix"?

> Rename LogCleaner and related classes to LogCompactor
> -
>
> Key: KAFKA-1944
> URL: https://issues.apache.org/jira/browse/KAFKA-1944
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Pranav Maniar
>  Labels: newbie
>
> Following a mailing list discussion:
> "the name LogCleaner is seriously misleading. Its more of a log compactor. 
> Deleting old logs happens elsewhere from what I've seen."
> Note that this may require renaming related classes, objects, configs and 
> metrics.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5621) The producer should retry expired batches when retries are enabled

2017-07-31 Thread Apurva Mehta (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16107971#comment-16107971
 ] 

Apurva Mehta commented on KAFKA-5621:
-

Thanks for your comment Becket, and sorry for the delay in responding. My 
responses are inline:

{quote}Is it really different from applications and MM when a partition cannot 
make progress? It seems in both cases the users would want to know that at some 
point and handle it? I think retries are also for this purpose, otherwise we 
may block forever. If I understand right, what this ticket is proposing is just 
to extend the batch expiration time from request.timeout.ms to 
request.timeout.ms * retries. And KIP-91 proposes having an additional explicit 
configuration for that batch expiration time instead of deriving it from 
request timeout. They seem not quite different except that KIP-91 decouples the 
configurations from each other.
{quote}

This is a good question. Let me try to explain my point of view in more detail. 

When I talk about an 'application', I mean software which is using Kafka to 
solve some business problem. In this context, the partitions of a topic are 
more akin to an implementation detail to help with scaling throughput. From the 
point of view of application correctness, no partition can be left behind. 

Of course, not all applications fit this profile, but a significant number do 
(for instance, many streams applications). And for these applications, there 
should be a mode where Kafka does as much work as possible to ensure messages 
are delivered, because error handling is hard to reason about. For instance, an 
application level resend might introduce duplicates, and writing de-dup 
infrastructure is expensive and error prone, so we might as well rely on Kafka to do 
dedup for the application as much as possible. This is the motivation for the 
proposal in the current JIRA.

This contrasts with the MirrorMaker use case. If MirrorMaker is replicating 
1000 partitions, and there is some failure, it is still better to replicate 900 
partitions rather than 0 partitions. 

Having written all that, I think I agree with you that there is value in adding a 
config to control the maximum time to wait for an acknowledgement, essentially 
your {{expiry.ms}} config. It might be more intuitive to name it something like 
{{message.max.delivery.wait.ms}}. Further, we can enforce that it is set to a 
minimum of {{request.timeout.ms + linger.ms}}, which means that there would be at 
least one attempt to send the message when the producer isn't backed up. By 
default, we can leave it pretty high. 

So, we would then have the following: 

{{retries}} -- current meaning.
{{request.timeout.ms}} -- current meaning, but messages are not expired after 
this time.
{{message.max.delivery.wait.ms}} -- new config, controls how long to try to 
send messages before erroring them out.

I like this scheme. It doesn't expose users to the notion of accumulator queues 
(by avoiding any mention of 'batch'). It enables applications to delegate error 
handling to Kafka to the maximum possible extent (by setting 
{{retries=MAX_INT}} and {{message.max.delivery.wait.ms=MAX_LONG}}). And it 
enables MirrorMaker to bound the effect of unavailable partitions by setting 
{{message.max.delivery.wait.ms}} to be sufficiently low, presumably some 
function of the expected throughput in the steady state.
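
To make the two scenarios concrete, a sketch of the configurations under this 
scheme; note that {{message.max.delivery.wait.ms}} is only the name proposed 
above and is not an existing producer config:
{code}
import java.util.Properties;

public class ProducerTimeoutConfigs {

    // Application that delegates error handling to Kafka as far as possible.
    static Properties delegateToKafka() {
        Properties props = new Properties();
        props.put("retries", Integer.toString(Integer.MAX_VALUE));
        props.put("request.timeout.ms", "30000");
        // Proposed config from this discussion / KIP-91; not a real config today.
        props.put("message.max.delivery.wait.ms", Long.toString(Long.MAX_VALUE));
        return props;
    }

    // MirrorMaker-style pipeline that would rather expire batches for an
    // unavailable partition after a bounded delay than stall indefinitely.
    static Properties boundedDelivery() {
        Properties props = new Properties();
        props.put("retries", Integer.toString(Integer.MAX_VALUE));
        props.put("request.timeout.ms", "30000");
        props.put("message.max.delivery.wait.ms", "120000"); // e.g. two minutes
        return props;
    }
}
{code}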

So in effect, I am in favor of KIP-91 with a few tweaks to the config name, 
its default value, and its semantics. What do the rest of you think?




> The producer should retry expired batches when retries are enabled
> --
>
> Key: KAFKA-5621
> URL: https://issues.apache.org/jira/browse/KAFKA-5621
> Project: Kafka
>  Issue Type: Bug
>Reporter: Apurva Mehta
>Assignee: Apurva Mehta
> Fix For: 1.0.0
>
>
> Today, when a batch is expired in the accumulator, a {{TimeoutException}} is 
> raised to the user.
> It might be better for the producer to retry the expired batch up to the 
> configured number of retries. This is more intuitive from the user's point of 
> view. 
> Further the proposed behavior makes it easier for applications like mirror 
> maker to provide ordering guarantees even when batches expire. Today, they 
> would resend the expired batch and it would get added to the back of the 
> queue, causing the output ordering to be different from the input ordering.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (KAFKA-960) Upgrade Metrics to 3.x

2017-07-31 Thread Gennady Feldman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16107720#comment-16107720
 ] 

Gennady Feldman edited comment on KAFKA-960 at 7/31/17 9:04 PM:


Latest metrics from dropwizard (new location) is 3.2.3


was (Author: gena01):
Latest metrics from dropwizard (new location) is 3.2.3 and has some interesting 
features for ops teams (reporting metrics via HTTP): 
http://metrics.dropwizard.io/3.2.3/getting-started.html#reporting-via-http

> Upgrade Metrics to 3.x
> --
>
> Key: KAFKA-960
> URL: https://issues.apache.org/jira/browse/KAFKA-960
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.1
>Reporter: Cosmin Lehene
>
> Now that metrics 3.0 has been released 
> (http://metrics.codahale.com/about/release-notes/) we can upgrade back



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5682) Consumer should include partition in exceptions raised during record parsing/validation

2017-07-31 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-5682:
---
Labels: needs-kip  (was: )

> Consumer should include partition in exceptions raised during record 
> parsing/validation
> ---
>
> Key: KAFKA-5682
> URL: https://issues.apache.org/jira/browse/KAFKA-5682
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Reporter: Jason Gustafson
>  Labels: needs-kip
> Fix For: 1.0.0
>
>
> When we encounter an exception when validating a fetched record or when 
> deserializing it, we raise it to the user and keep the consumer's current 
> position at the offset of the failed record. The expectation is that the user 
> will either propagate the exception and shutdown or seek past the failed 
> record. However, in the latter case, there is no way for the user to know 
> which topic partition had the failed record. We should consider exposing an 
> exception type to expose this information which users can catch. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5682) Consumer should include partition in exceptions raised during record parsing/validation

2017-07-31 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-5682:
--

 Summary: Consumer should include partition in exceptions raised 
during record parsing/validation
 Key: KAFKA-5682
 URL: https://issues.apache.org/jira/browse/KAFKA-5682
 Project: Kafka
  Issue Type: Improvement
  Components: consumer
Reporter: Jason Gustafson
 Fix For: 1.0.0


When we encounter an exception when validating a fetched record or when 
deserializing it, we raise it to the user and keep the consumer's current 
position at the offset of the failed record. The expectation is that the user 
will either propagate the exception and shutdown or seek past the failed 
record. However, in the latter case, there is no way for the user to know which 
topic partition had the failed record. We should consider exposing an exception 
type which carries this information and which users can catch.
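
For illustration, a rough sketch of what such an exception could look like and 
how an application would use it to skip the bad record; the exception type and 
its accessors are hypothetical, not part of the current consumer API:
{code}
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.TopicPartition;

public class SkipBadRecordExample {

    // Hypothetical exception carrying the partition/offset of the failed record.
    public static class RecordDeserializationException extends KafkaException {
        private final TopicPartition partition;
        private final long offset;

        public RecordDeserializationException(TopicPartition partition, long offset, Throwable cause) {
            super(cause);
            this.partition = partition;
            this.offset = offset;
        }

        public TopicPartition partition() { return partition; }
        public long offset() { return offset; }
    }

    // With such a type, "seek past the failed record" becomes straightforward.
    public static void pollSkippingBadRecords(Consumer<byte[], byte[]> consumer) {
        while (true) {
            try {
                ConsumerRecords<byte[], byte[]> records = consumer.poll(1000);
                // process(records) ...
            } catch (RecordDeserializationException e) {
                consumer.seek(e.partition(), e.offset() + 1);
            }
        }
    }
}
{code}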



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (KAFKA-1120) Controller could miss a broker state change

2017-07-31 Thread James Cheng (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16107857#comment-16107857
 ] 

James Cheng edited comment on KAFKA-1120 at 7/31/17 7:53 PM:
-

[~noslowerdna] [~junrao],

I retested this with Kafka 0.11. The problem still exists.

I followed the steps from my 24/Feb/17 22:57 comment. I ran it maybe 10 times 
in a row. Every single time, the broker that I restarted came back up and did 
not take leadership for any partitions. In addition, it only became a follower 
for about half the partitions.

The fact that it became a follower for half the partitions shows that the 
controller is at least aware that the broker exists (that is, the controller 
successfully saw the broker come back online). But the controller didn't tell 
the broker to follow all the partitions that it should have.



was (Author: wushujames):
Hi,

I retested this with Kafka 0.11. The problem still exists.

I followed the steps from my 24/Feb/17 22:57 comment. I ran it maybe 10 times 
in a row. Every single time, the broker that I restarted came back up and did 
not take leadership for any partitions. In addition, it only became a follower 
for about half the partitions.

The fact that it became a follower for half the partitions shows that the 
controller is at least aware that the broker exists (that is, the controller 
successfully saw the broker come back online). But the controller didn't tell 
the broker to follow all the partitions that it should have.


> Controller could miss a broker state change 
> 
>
> Key: KAFKA-1120
> URL: https://issues.apache.org/jira/browse/KAFKA-1120
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.1
>Reporter: Jun Rao
>  Labels: reliability
>
> When the controller is in the middle of processing a task (e.g., preferred 
> leader election, broker change), it holds a controller lock. During this 
> time, a broker could have de-registered and re-registered itself in ZK. After 
> the controller finishes processing the current task, it will start processing 
> the logic in the broker change listener. However, it will see no broker 
> change and therefore won't do anything to the restarted broker. This broker 
> will be in a weird state since the controller doesn't inform it to become the 
> leader of any partition. Yet, the cached metadata in other brokers could 
> still list that broker as the leader for some partitions. Client requests 
> routed to that broker will then get a TopicOrPartitionNotExistException. This 
> broker will continue to be in this bad state until it's restarted again.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-1120) Controller could miss a broker state change

2017-07-31 Thread James Cheng (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16107857#comment-16107857
 ] 

James Cheng commented on KAFKA-1120:


Hi,

I retested this with Kafka 0.11. The problem still exists.

I followed the steps from my 24/Feb/17 22:57 comment. I ran it maybe 10 times 
in a row. Every single time, the broker that I restarted came back up and did 
not take leadership for any partitions. In addition, it only became a follower 
for about half the partitions.

The fact that it became a follower for half the partitions shows that the 
controller is at least aware that the broker exists (that is, the controller 
successfully saw the broker come back online). But the controller didn't tell 
the broker to follow all the partitions that it should have.


> Controller could miss a broker state change 
> 
>
> Key: KAFKA-1120
> URL: https://issues.apache.org/jira/browse/KAFKA-1120
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.1
>Reporter: Jun Rao
>  Labels: reliability
>
> When the controller is in the middle of processing a task (e.g., preferred 
> leader election, broker change), it holds a controller lock. During this 
> time, a broker could have de-registered and re-registered itself in ZK. After 
> the controller finishes processing the current task, it will start processing 
> the logic in the broker change listener. However, it will see no broker 
> change and therefore won't do anything to the restarted broker. This broker 
> will be in a weird state since the controller doesn't inform it to become the 
> leader of any partition. Yet, the cached metadata in other brokers could 
> still list that broker as the leader for some partitions. Client requests 
> routed to that broker will then get a TopicOrPartitionNotExistException. This 
> broker will continue to be in this bad state until it's restarted again.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5681) jarAll does not build all scala versions anymore.

2017-07-31 Thread Jiangjie Qin (JIRA)
Jiangjie Qin created KAFKA-5681:
---

 Summary: jarAll does not build all scala versions anymore.
 Key: KAFKA-5681
 URL: https://issues.apache.org/jira/browse/KAFKA-5681
 Project: Kafka
  Issue Type: Bug
  Components: build
Affects Versions: 0.11.0.0
Reporter: Jiangjie Qin
Assignee: Jiangjie Qin
 Fix For: 0.11.0.1


./gradlew jarAll no longer builds jars for all Scala versions. We should use 
{{availableScalaVersions}} instead of {{defaultScalaVersions}} when building. We 
should probably consider backporting the fix to 0.11.0.0.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-1944) Rename LogCleaner and related classes to LogCompactor

2017-07-31 Thread Pranav Maniar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranav Maniar reassigned KAFKA-1944:


Assignee: Pranav Maniar  (was: Aravind Selvan)

> Rename LogCleaner and related classes to LogCompactor
> -
>
> Key: KAFKA-1944
> URL: https://issues.apache.org/jira/browse/KAFKA-1944
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Pranav Maniar
>  Labels: newbie
>
> Following a mailing list discussion:
> "the name LogCleaner is seriously misleading. Its more of a log compactor. 
> Deleting old logs happens elsewhere from what I've seen."
> Note that this may require renaming related classes, objects, configs and 
> metrics.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5505) Connect: Do not restart connector and existing tasks on task-set change

2017-07-31 Thread Dan Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16107727#comment-16107727
 ] 

Dan Collins commented on KAFKA-5505:


Hi, I wanted to add on to this given we are working around the same behavior, 
but for a different use case. We have an orchestration service that takes care 
of wiring sinks out of Kafka using Connect, and some of the orchestration 
scenarios we have require us to set up or modify tens of configurations in 
Connect at a time. Our current approach to work around this is to retry with 
backoff and pre-package connector configurations, but it seems like there 
should be a better way. It's also a bit painful to get a 409 Conflict on a GET 
request for current configurations when tasks are rebalancing, especially given 
we may have multiple orchestration events running simultaneously. 

The requested change here to only rebalance for new/deleted tasks would be a 
nice improvement, as it'd decrease the initialization time and improve 
availability. Other thoughts we've had on this would be to expose a new 
endpoint with bulk API support, or a flag on configurations to indicate that 
new tasks should not immediately trigger a rebalance or may be scheduled in 
the future. 

Thanks! -Dan
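
As an aside, a minimal sketch of the retry-with-backoff workaround described 
above; the {{Callable}} is a stand-in for whatever HTTP client issues the 
Connect REST call and returns the status code:
{code}
import java.util.concurrent.Callable;

public class RebalanceRetry {

    // Retries a Connect REST call while the cluster is rebalancing (HTTP 409),
    // backing off exponentially between attempts.
    static int withBackoff(Callable<Integer> request, int maxAttempts) throws Exception {
        long backoffMs = 500;
        for (int attempt = 1; ; attempt++) {
            int status = request.call();
            if (status != 409 || attempt >= maxAttempts) {
                return status;
            }
            Thread.sleep(backoffMs);
            backoffMs = Math.min(backoffMs * 2, 30_000); // cap the backoff at 30s
        }
    }
}
{code}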

> Connect: Do not restart connector and existing tasks on task-set change
> ---
>
> Key: KAFKA-5505
> URL: https://issues.apache.org/jira/browse/KAFKA-5505
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 0.10.2.1
>Reporter: Per Steffensen
>
> I am writing a connector with a frequently changing task-set. It is really 
> not working very well, because the connector and all existing tasks are 
> restarted when the set of tasks changes. E.g. if the connector is running 
> with 10 tasks, and an additional task is needed, the connector itself and all 
> 10 existing tasks are restarted, just to make the 11th task run also. My 
> tasks have a fairly heavy initialization, making it extra annoying. I would 
> like to see a change, introducing a "mode", where only new/deleted tasks are 
> started/stopped when notifying the system that the set of tasks changed 
> (calling context.requestTaskReconfiguration() - or something similar).
> Discussed this issue a little on d...@kafka.apache.org in the thread "Kafka 
> Connect: To much restarting with a SourceConnector with dynamic set of tasks"



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-960) Upgrade Metrics to 3.x

2017-07-31 Thread Gennady Feldman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16107720#comment-16107720
 ] 

Gennady Feldman commented on KAFKA-960:
---

Latest metrics from dropwizard (new location) is 3.2.3 and has some interesting 
features for ops teams (reporting metrics via HTTP): 
http://metrics.dropwizard.io/3.2.3/getting-started.html#reporting-via-http

> Upgrade Metrics to 3.x
> --
>
> Key: KAFKA-960
> URL: https://issues.apache.org/jira/browse/KAFKA-960
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.1
>Reporter: Cosmin Lehene
>
> Now that metrics 3.0 has been released 
> (http://metrics.codahale.com/about/release-notes/) we can upgrade back



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-1944) Rename LogCleaner and related classes to LogCompactor

2017-07-31 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16107678#comment-16107678
 ] 

Ismael Juma commented on KAFKA-1944:


 By the way, we should also consider whether this change is worth it. After 
all, many are used to the existing names. Is the improvement worth the 
disruption?

> Rename LogCleaner and related classes to LogCompactor
> -
>
> Key: KAFKA-1944
> URL: https://issues.apache.org/jira/browse/KAFKA-1944
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Aravind Selvan
>  Labels: newbie
>
> Following a mailing list discussion:
> "the name LogCleaner is seriously misleading. Its more of a log compactor. 
> Deleting old logs happens elsewhere from what I've seen."
> Note that this may require renaming related classes, objects, configs and 
> metrics.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-5676) MockStreamsMetrics should be in o.a.k.test

2017-07-31 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang reassigned KAFKA-5676:


Assignee: Chanchal Singh

> MockStreamsMetrics should be in o.a.k.test
> --
>
> Key: KAFKA-5676
> URL: https://issues.apache.org/jira/browse/KAFKA-5676
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Chanchal Singh
>  Labels: newbie
>
> {{MockStreamsMetrics}}'s package should be `o.a.k.test` not 
> `o.a.k.streams.processor.internals`. 
> In addition, it should not require a {{Metrics}} parameter in its constructor, 
> as that is only needed for its extended base class; the right way of mocking 
> would be to implement {{StreamsMetrics}} with mock behavior rather than extend a 
> real implementation of {{StreamsMetricsImpl}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-1944) Rename LogCleaner and related classes to LogCompactor

2017-07-31 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16107672#comment-16107672
 ] 

Guozhang Wang commented on KAFKA-1944:
--

I also agree with [~hachikuji] that we should consider deprecating the config 
(but not renaming it instantaneously) and introducing a new config, along with 
the class name changes. In addition, how the values are resolved should be 
considered; I'm thinking the following (a sketch follows the list):

0. The default value of the new config should be the same as the deprecated 
config, i.e. {{true}}.
1. If both the deprecated and the new config values are specified, log a WARN 
entry and choose the new config value.
2. If only one of them is specified, choose its value.
3. If neither of them is specified, use the default value, which is the same 
for both.
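
A minimal sketch of those precedence rules; {{log.compactor.enable}} is only a 
made-up placeholder for whatever the new config ends up being called:
{code}
import java.util.Properties;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CleanerConfigResolution {
    private static final Logger log = LoggerFactory.getLogger(CleanerConfigResolution.class);

    // Sketch of the precedence rules above; not actual Kafka code.
    static boolean resolveCleanerEnable(Properties props) {
        String oldValue = props.getProperty("log.cleaner.enable");
        String newValue = props.getProperty("log.compactor.enable");
        if (oldValue != null && newValue != null) {
            // rule 1: both specified -> warn and prefer the new config
            log.warn("Both log.cleaner.enable and log.compactor.enable are set; using log.compactor.enable={}", newValue);
            return Boolean.parseBoolean(newValue);
        }
        if (newValue != null) return Boolean.parseBoolean(newValue); // rule 2
        if (oldValue != null) return Boolean.parseBoolean(oldValue); // rule 2
        return true; // rules 0 and 3: shared default of true
    }
}
{code}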

> Rename LogCleaner and related classes to LogCompactor
> -
>
> Key: KAFKA-1944
> URL: https://issues.apache.org/jira/browse/KAFKA-1944
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Aravind Selvan
>  Labels: newbie
>
> Following a mailing list discussion:
> "the name LogCleaner is seriously misleading. Its more of a log compactor. 
> Deleting old logs happens elsewhere from what I've seen."
> Note that this may require renaming related classes, objects, configs and 
> metrics.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5677) Remove deprecated punctuate method

2017-07-31 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16107662#comment-16107662
 ] 

Guozhang Wang commented on KAFKA-5677:
--

I think we should not remove test coverage as long as the deprecated functions 
have not been removed yet. On the other hand, unit test util functions like 
{{XXTestDriver}}, the various mocks, etc. should be updated to use the new APIs 
instead of the deprecated ones ASAP.

> Remove deprecated punctuate method
> --
>
> Key: KAFKA-5677
> URL: https://issues.apache.org/jira/browse/KAFKA-5677
> Project: Kafka
>  Issue Type: Task
>Reporter: Michal Borowiecki
>
> Task to track the removal of the punctuate method that got deprecated in 
> KAFKA-5233 and associated unit tests.
> (not sure the fix version number at this point)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5233) Changes to punctuate semantics (KIP-138)

2017-07-31 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16107653#comment-16107653
 ] 

Guozhang Wang commented on KAFKA-5233:
--

I agree with both of you regarding keeping the unit tests for the 
deprecated-but-not-removed functionality. What I was saying is that a unit test 
util function like {{KStreamTestDriver}} should be updated to the new APIs, as 
it is not there for test coverage but for helping with the unit tests, and 
hence should use the latest stable APIs.

> Changes to punctuate semantics (KIP-138)
> 
>
> Key: KAFKA-5233
> URL: https://issues.apache.org/jira/browse/KAFKA-5233
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Michal Borowiecki
>Assignee: Michal Borowiecki
>  Labels: kip
> Fix For: 1.0.0
>
>
> This ticket is to track implementation of 
> [KIP-138: Change punctuate 
> semantics|https://cwiki.apache.org/confluence/display/KAFKA/KIP-138%3A+Change+punctuate+semantics]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-5621) The producer should retry expired batches when retries are enabled

2017-07-31 Thread Apurva Mehta (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Apurva Mehta reassigned KAFKA-5621:
---

Assignee: Apurva Mehta

> The producer should retry expired batches when retries are enabled
> --
>
> Key: KAFKA-5621
> URL: https://issues.apache.org/jira/browse/KAFKA-5621
> Project: Kafka
>  Issue Type: Bug
>Reporter: Apurva Mehta
>Assignee: Apurva Mehta
> Fix For: 1.0.0
>
>
> Today, when a batch is expired in the accumulator, a {{TimeoutException}} is 
> raised to the user.
> It might be better for the producer to retry the expired batch up to the 
> configured number of retries. This is more intuitive from the user's point of 
> view. 
> Further the proposed behavior makes it easier for applications like mirror 
> maker to provide ordering guarantees even when batches expire. Today, they 
> would resend the expired batch and it would get added to the back of the 
> queue, causing the output ordering to be different from the input ordering.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5674) max.connections.per.ip minimum value to be zero to allow IP address blocking

2017-07-31 Thread Viktor Somogyi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16107570#comment-16107570
 ] 

Viktor Somogyi commented on KAFKA-5674:
---

[~tmgstev], I'd like to assign this to myself if you don't mind and you aren't working on it.

> max.connections.per.ip minimum value to be zero to allow IP address blocking
> 
>
> Key: KAFKA-5674
> URL: https://issues.apache.org/jira/browse/KAFKA-5674
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.11.0.0
>Reporter: Tristan Stevens
>
> Currently the max.connections.per.ip (KAFKA-1512) config has a minimum value 
> of 1, however, as suggested in 
> https://issues.apache.org/jira/browse/KAFKA-1512?focusedCommentId=14051914, 
> having this with a minimum value of zero would allow IP-based filtering of 
> inbound connections (effectively prohibiting those IP addresses from connecting 
> altogether).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5678) When the broker graceful shutdown occurs, the producer side sends timeout.

2017-07-31 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16107569#comment-16107569
 ] 

Jiangjie Qin commented on KAFKA-5678:
-

[~Json Tu] [~cuiyang] In DelayedProduce.tryComplete(), it will complete the 
delayed produce immediately when the leader replica is not local. So there 
should be no difference between calling forceComplete() and calling 
tryComplete() in the shutdown case. When the broker shuts down, all the 
producers should immediately receive a produce response with the 
NOT_LEADER_FOR_PARTITION error code for all the partitions.

One thing worth checking is that during controlled shutdown, sometimes the 
controlled shutdown request itself can take a very long time to complete, 
especially when there are many requests pending in the broker. So it would be 
good to see how long the controlled shutdown request itself took. This should 
be visible in the request logger at debug level.

> When the broker graceful shutdown occurs, the producer side sends timeout.
> --
>
> Key: KAFKA-5678
> URL: https://issues.apache.org/jira/browse/KAFKA-5678
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0, 0.10.0.0, 0.11.0.0
>Reporter: tuyang
>
> Test environment as follows:
> 1. Kafka version: 0.9.0.1
> 2. Cluster with 3 brokers, with broker ids A, B, C.
> 3. Topic with 6 partitions and 2 replicas, with 2 leader partitions on each 
> broker.
> We can reproduce the problem as follows.
> 1. We send messages as quickly as possible with acks=-1.
> 2. If partition p0's leader is on broker A and we gracefully shut down broker 
> A, but we send a message to p0 before the leader is re-elected, the message 
> can be appended to the leader replica successfully; but if the follower 
> replica does not catch up quickly enough, the shutting-down broker will 
> create a DelayedProduce for this request and wait for it to complete until 
> request.timeout.ms.
> 3. Because of the controlled shutdown request from broker A, the p0 partition 
> leader will then be re-elected, and the replica on broker A will become a 
> follower before the shutdown completes. The DelayedProduce will then not be 
> triggered to complete until it expires.
> 4. If broker A's shutdown takes too long, the producer will only get a 
> response after request.timeout.ms, which increases producer send latency 
> when we are restarting brokers one by one.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-1944) Rename LogCleaner and related classes to LogCompactor

2017-07-31 Thread Pranav Maniar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16107346#comment-16107346
 ] 

Pranav Maniar commented on KAFKA-1944:
--

Thanks [~becket_qin], I will go through the wiki page and try to create a KIP.
I see that I can't access the KIP template. If you can assign permission, that 
would be great. Meanwhile I will also send a mail to d...@kafka.apache.org as 
mentioned on the wiki page.

> Rename LogCleaner and related classes to LogCompactor
> -
>
> Key: KAFKA-1944
> URL: https://issues.apache.org/jira/browse/KAFKA-1944
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Aravind Selvan
>  Labels: newbie
>
> Following a mailing list discussion:
> "the name LogCleaner is seriously misleading. Its more of a log compactor. 
> Deleting old logs happens elsewhere from what I've seen."
> Note that this may require renaming related classes, objects, configs and 
> metrics.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-5653) Add new API methods to KTable

2017-07-31 Thread Damian Guy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damian Guy reassigned KAFKA-5653:
-

Assignee: Damian Guy

> Add new API methods to KTable
> -
>
> Key: KAFKA-5653
> URL: https://issues.apache.org/jira/browse/KAFKA-5653
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Damian Guy
>Assignee: Damian Guy
>
> placeholder until API finalized



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-5655) Add new API methods to KGroupedTable

2017-07-31 Thread Damian Guy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damian Guy reassigned KAFKA-5655:
-

Assignee: Damian Guy

> Add new API methods to KGroupedTable
> 
>
> Key: KAFKA-5655
> URL: https://issues.apache.org/jira/browse/KAFKA-5655
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Damian Guy
>Assignee: Damian Guy
>
> Placeholder until API finalized



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (KAFKA-5678) When the broker graceful shutdown occurs, the producer side sends timeout.

2017-07-31 Thread cuiyang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16107288#comment-16107288
 ] 

cuiyang edited comment on KAFKA-5678 at 7/31/17 1:14 PM:
-

Unfortunately, this issue can still be reproduced on our Kafka cluster even 
though we have already upgraded it to 0.10.2.1. Our producer hits a "Broker 
timeout" error when we restart the brokers one by one, and the only thing it 
can do is throw the timed-out record away, because the producer is invoked by 
our web server.

We set our producer's acks to -1, but that does not seem to help, so I think 
this issue still exists in the 0.10.x versions.

I also think we should return the DelayedProduce response to the producer 
immediately once the leader switch happens, so that the producer can learn 
what happened in time and retry after the "back.off" time instead of hitting 
the request timeout.

  Leader Reelection --- DelayedProduce timeout --- Broker Shutdown complete
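
To make the intent concrete, what the producer would then rely on is roughly 
its built-in retry path; a hedged sketch (all configuration values, the class 
name, and the topic are placeholders, not a recommendation):

{code:java}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class RetryOnLeaderSwitchSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "brokerA:9092,brokerB:9092,brokerC:9092");
        props.put(ProducerConfig.ACKS_CONFIG, "all");              // same as -1
        props.put(ProducerConfig.RETRIES_CONFIG, "5");             // retriable errors (e.g. NOT_LEADER_FOR_PARTITION) are retried
        props.put(ProducerConfig.RETRY_BACKOFF_MS_CONFIG, "100");  // the "back.off" pause between retries
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("web-events", "key", "value"), (metadata, exception) -> {
                if (exception != null) {
                    // If the broker answered the delayed produce promptly on leader change,
                    // this would usually be a fast retriable error rather than a slow
                    // request timeout; the web server could then decide whether to drop it.
                    System.err.println("Send failed after retries: " + exception);
                }
            });
        }
    }
}
{code}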


was (Author: cuiyang):
Unfortunately, this issue can still be reproduced on our Kafka cluster even 
though we have already upgraded it to 0.10.2.1. Our producer hits a "Broker 
timeout" error when we restart the brokers one by one, and the only thing it 
can do is throw the timed-out record away, because the producer is invoked by 
our web server.

We set our producer's acks to -1, but that does not seem to help, so I think 
this issue still exists in the 0.10.x versions.

I also think we should return the DelayedProduce response to the producer 
immediately once the leader switch happens, so that the producer can learn 
what happened in time and retry after the "back.off" time instead of hitting 
the request timeout.

  Leader Reelection --- DelayedProduce timeout --- Broker Shutdown complete

> When the broker graceful shutdown occurs, the producer side sends timeout.
> --
>
> Key: KAFKA-5678
> URL: https://issues.apache.org/jira/browse/KAFKA-5678
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0, 0.10.0.0, 0.11.0.0
>Reporter: tuyang
>
> Test environment:
> 1. Kafka version: 0.9.0.1
> 2. Cluster of 3 brokers with broker ids A, B, C.
> 3. Topic with 6 partitions and 2 replicas, with 2 leader partitions on each 
> broker.
> We can reproduce the problem as follows.
> 1. We send messages as quickly as possible with acks=-1.
> 2. Partition p0's leader is on broker A and we gracefully shut down broker 
> A, but we send a message to p0 before the leader is re-elected. The message 
> is appended to the leader replica successfully, but if the follower replica 
> does not catch up quickly enough, the shutting-down broker creates a 
> delayProduce for this request that waits to complete until request.timeout.ms.
> 3. Because of the controlled shutdown request from broker A, the p0 partition 
> leader is re-elected and the replica on broker A becomes a follower before 
> the shutdown completes. The delayProduce is then never triggered to complete 
> and only expires.
> 4. If broker A takes too long to shut down, the producer only gets a response 
> after request.timeout.ms, which increases producer send latency while we 
> restart the brokers one by one.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (KAFKA-5678) When the broker graceful shutdown occurs, the producer side sends timeout.

2017-07-31 Thread cuiyang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16107288#comment-16107288
 ] 

cuiyang edited comment on KAFKA-5678 at 7/31/17 1:13 PM:
-

Unfortunately, this issue can still be reproduced on our Kafka cluster even 
though we have already upgraded it to 0.10.2.1. Our producer hits a "Broker 
timeout" error when we restart the brokers one by one, and the only thing it 
can do is throw the timed-out record away, because the producer is invoked by 
our web server.

We set our producer's acks to -1, but that does not seem to help, so I think 
this issue still exists in the 0.10.x versions.

I also think we should return the DelayedProduce response to the producer 
immediately once the leader switch happens, so that the producer can learn 
what happened in time and retry after the "back.off" time instead of hitting 
the request timeout.

  Leader Reelection --- DelayedProduce timeout --- Broker Shutdown complete


was (Author: cuiyang):
Unfortunately, this issue can still be reproduced on our Kafka cluster even 
though we have already upgraded it to 0.10.2.1. Our producer hits a "Broker 
timeout" error when we restart the brokers one by one, and the only thing it 
can do is throw the timed-out record away, because the producer is invoked by 
our web server.

We set our producer's acks to -1, but that does not seem to help, so I think 
this issue still exists in the 0.10.x versions.

I also think we should return the DelayedProduce response to the producer 
immediately once the leader switch happens, so that the producer can learn 
what happened in time and retry after the "back.off" time instead of hitting 
the request timeout.

  Leader Reelection --- DelayedProduce timeout --- Broker Shutdown complete

> When the broker graceful shutdown occurs, the producer side sends timeout.
> --
>
> Key: KAFKA-5678
> URL: https://issues.apache.org/jira/browse/KAFKA-5678
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0, 0.10.0.0, 0.11.0.0
>Reporter: tuyang
>
> Test environment:
> 1. Kafka version: 0.9.0.1
> 2. Cluster of 3 brokers with broker ids A, B, C.
> 3. Topic with 6 partitions and 2 replicas, with 2 leader partitions on each 
> broker.
> We can reproduce the problem as follows.
> 1. We send messages as quickly as possible with acks=-1.
> 2. Partition p0's leader is on broker A and we gracefully shut down broker 
> A, but we send a message to p0 before the leader is re-elected. The message 
> is appended to the leader replica successfully, but if the follower replica 
> does not catch up quickly enough, the shutting-down broker creates a 
> delayProduce for this request that waits to complete until request.timeout.ms.
> 3. Because of the controlled shutdown request from broker A, the p0 partition 
> leader is re-elected and the replica on broker A becomes a follower before 
> the shutdown completes. The delayProduce is then never triggered to complete 
> and only expires.
> 4. If broker A takes too long to shut down, the producer only gets a response 
> after request.timeout.ms, which increases producer send latency while we 
> restart the brokers one by one.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (KAFKA-5678) When the broker graceful shutdown occurs, the producer side sends timeout.

2017-07-31 Thread cuiyang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16107288#comment-16107288
 ] 

cuiyang edited comment on KAFKA-5678 at 7/31/17 1:12 PM:
-

Unfortunately, this issue can still be reproduced on our Kafka cluster even 
though we have already upgraded it to 0.10.2.1. Our producer hits a "Broker 
timeout" error when we restart the brokers one by one, and the only thing it 
can do is throw the timed-out record away, because the producer is invoked by 
our web server.

We set our producer's acks to -1, but that does not seem to help, so I think 
this issue still exists in the 0.10.x versions.

I also think we should return the DelayedProduce response to the producer 
immediately once the leader switch happens, so that the producer can learn 
what happened in time and retry after the "back.off" time instead of hitting 
the request timeout.

  Leader Reelection --- DelayedProduce timeout --- Broker Shutdown complete


was (Author: cuiyang):
Unfortunately, this issue can still be reproduced on our Kafka cluster even 
though we have already upgraded it to 0.10.2.1. Our producer hits a "Broker 
timeout" error when we restart the brokers one by one, and the only thing it 
can do is throw the timed-out record away, because the producer is invoked by 
our web server.

We set our producer's acks to -1, but that does not seem to help, so I think 
this issue still exists in the 0.10.x versions.

I also think we should return the DelayedProduce response to the producer 
immediately once the leader switch happens, so that the producer can learn 
what happened in time and retry after the "back.off" time instead of hitting 
the request timeout.

Leader Reelection        DelayedProduce timeout        Broker Shutdown complete
        |                          |                              |
--------+--------------------------+------------------------------+---> Timeline

> When the broker graceful shutdown occurs, the producer side sends timeout.
> --
>
> Key: KAFKA-5678
> URL: https://issues.apache.org/jira/browse/KAFKA-5678
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0, 0.10.0.0, 0.11.0.0
>Reporter: tuyang
>
> Test environment:
> 1. Kafka version: 0.9.0.1
> 2. Cluster of 3 brokers with broker ids A, B, C.
> 3. Topic with 6 partitions and 2 replicas, with 2 leader partitions on each 
> broker.
> We can reproduce the problem as follows.
> 1. We send messages as quickly as possible with acks=-1.
> 2. Partition p0's leader is on broker A and we gracefully shut down broker 
> A, but we send a message to p0 before the leader is re-elected. The message 
> is appended to the leader replica successfully, but if the follower replica 
> does not catch up quickly enough, the shutting-down broker creates a 
> delayProduce for this request that waits to complete until request.timeout.ms.
> 3. Because of the controlled shutdown request from broker A, the p0 partition 
> leader is re-elected and the replica on broker A becomes a follower before 
> the shutdown completes. The delayProduce is then never triggered to complete 
> and only expires.
> 4. If broker A takes too long to shut down, the producer only gets a response 
> after request.timeout.ms, which increases producer send latency while we 
> restart the brokers one by one.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (KAFKA-5678) When the broker graceful shutdown occurs, the producer side sends timeout.

2017-07-31 Thread cuiyang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16107288#comment-16107288
 ] 

cuiyang edited comment on KAFKA-5678 at 7/31/17 1:11 PM:
-

Unfortunately, this issue can still be reproduced on our Kafka cluster even 
though we have already upgraded it to 0.10.2.1. Our producer hits a "Broker 
timeout" error when we restart the brokers one by one, and the only thing it 
can do is throw the timed-out record away, because the producer is invoked by 
our web server.

We set our producer's acks to -1, but that does not seem to help, so I think 
this issue still exists in the 0.10.x versions.

I also think we should return the DelayedProduce response to the producer 
immediately once the leader switch happens, so that the producer can learn 
what happened in time and retry after the "back.off" time instead of hitting 
the request timeout.

Leader Reelection        DelayedProduce timeout        Broker Shutdown complete
        |                          |                              |
--------+--------------------------+------------------------------+---> Timeline


was (Author: cuiyang):
Unfortunately, this issue can still be reproduced on our Kafka cluster even 
though we have already upgraded it to 0.10.2.1. Our producer hits a "Broker 
timeout" error when we restart the brokers one by one, and the only thing it 
can do is throw the timed-out record away, because the producer is invoked by 
our web server.

We set our producer's acks to -1, but that does not seem to help, so I think 
this issue still exists in the 0.10.x versions.

I also think we should return the DelayedProduce response to the producer 
immediately once the leader switch happens, so that the producer can learn 
what happened in time and retry after the "back.off" time instead of hitting 
the request timeout.

Leader Reelection        DelayedProduce timeout        Broker Shutdown complete
        |                          |                              |
--------+--------------------------+------------------------------+---> Timeline

> When the broker graceful shutdown occurs, the producer side sends timeout.
> --
>
> Key: KAFKA-5678
> URL: https://issues.apache.org/jira/browse/KAFKA-5678
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0, 0.10.0.0, 0.11.0.0
>Reporter: tuyang
>
> Test environment:
> 1. Kafka version: 0.9.0.1
> 2. Cluster of 3 brokers with broker ids A, B, C.
> 3. Topic with 6 partitions and 2 replicas, with 2 leader partitions on each 
> broker.
> We can reproduce the problem as follows.
> 1. We send messages as quickly as possible with acks=-1.
> 2. Partition p0's leader is on broker A and we gracefully shut down broker 
> A, but we send a message to p0 before the leader is re-elected. The message 
> is appended to the leader replica successfully, but if the follower replica 
> does not catch up quickly enough, the shutting-down broker creates a 
> delayProduce for this request that waits to complete until request.timeout.ms.
> 3. Because of the controlled shutdown request from broker A, the p0 partition 
> leader is re-elected and the replica on broker A becomes a follower before 
> the shutdown completes. The delayProduce is then never triggered to complete 
> and only expires.
> 4. If broker A takes too long to shut down, the producer only gets a response 
> after request.timeout.ms, which increases producer send latency while we 
> restart the brokers one by one.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5678) When the broker graceful shutdown occurs, the producer side sends timeout.

2017-07-31 Thread cuiyang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16107288#comment-16107288
 ] 

cuiyang commented on KAFKA-5678:


Unfortunately, this issue can still be reproduced on our Kafka cluster even 
though we have already upgraded it to 0.10.2.1. Our producer hits a "Broker 
timeout" error when we restart the brokers one by one, and the only thing it 
can do is throw the timed-out record away, because the producer is invoked by 
our web server.

We set our producer's acks to -1, but that does not seem to help, so I think 
this issue still exists in the 0.10.x versions.

I also think we should return the DelayedProduce response to the producer 
immediately once the leader switch happens, so that the producer can learn 
what happened in time and retry after the "back.off" time instead of hitting 
the request timeout.

Leader Reelection        DelayedProduce timeout        Broker Shutdown complete
        |                          |                              |
--------+--------------------------+------------------------------+---> Timeline

> When the broker graceful shutdown occurs, the producer side sends timeout.
> --
>
> Key: KAFKA-5678
> URL: https://issues.apache.org/jira/browse/KAFKA-5678
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0, 0.10.0.0, 0.11.0.0
>Reporter: tuyang
>
> Test environment:
> 1. Kafka version: 0.9.0.1
> 2. Cluster of 3 brokers with broker ids A, B, C.
> 3. Topic with 6 partitions and 2 replicas, with 2 leader partitions on each 
> broker.
> We can reproduce the problem as follows.
> 1. We send messages as quickly as possible with acks=-1.
> 2. Partition p0's leader is on broker A and we gracefully shut down broker 
> A, but we send a message to p0 before the leader is re-elected. The message 
> is appended to the leader replica successfully, but if the follower replica 
> does not catch up quickly enough, the shutting-down broker creates a 
> delayProduce for this request that waits to complete until request.timeout.ms.
> 3. Because of the controlled shutdown request from broker A, the p0 partition 
> leader is re-elected and the replica on broker A becomes a follower before 
> the shutdown completes. The delayProduce is then never triggered to complete 
> and only expires.
> 4. If broker A takes too long to shut down, the producer only gets a response 
> after request.timeout.ms, which increases producer send latency while we 
> restart the brokers one by one.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5676) MockStreamsMetrics should be in o.a.k.test

2017-07-31 Thread Chanchal Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16107162#comment-16107162
 ] 

Chanchal Singh commented on KAFKA-5676:
---

I am new to open source contribution and want to start with this issue. Please 
assign it to me; I am not able to assign it to myself.


> MockStreamsMetrics should be in o.a.k.test
> --
>
> Key: KAFKA-5676
> URL: https://issues.apache.org/jira/browse/KAFKA-5676
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>  Labels: newbie
>
> {{MockStreamsMetrics}}'s package should be `o.a.k.test`, not 
> `o.a.k.streams.processor.internals`. 
> In addition, it should not require a {{Metrics}} parameter in its 
> constructor, as that is only needed by its extended base class; the right way 
> of mocking is to implement {{StreamsMetrics}} with mock behavior rather than 
> to extend a real implementation ({{StreamsMetricsImpl}}).
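
As an illustration of that direction, one option is to hand tests a mock of the 
public {{StreamsMetrics}} interface; a minimal sketch assuming Mockito is 
available on the test classpath (a small hand-written stub would work equally 
well, and the class name here is only illustrative):

{code:java}
import static org.mockito.Mockito.mock;

import org.apache.kafka.streams.StreamsMetrics;

public class MockStreamsMetricsExample {
    public static void main(String[] args) {
        // A mock built from the public interface: no Metrics registry is needed,
        // unlike a subclass of the real StreamsMetricsImpl implementation.
        StreamsMetrics metrics = mock(StreamsMetrics.class);

        // Pass `metrics` into the component under test instead of a real
        // StreamsMetricsImpl; stub individual methods only where a test needs them.
        System.out.println("Created mock: " + metrics);
    }
}
{code}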



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-2729) Cached zkVersion not equal to that in zookeeper, broker not recovering.

2017-07-31 Thread Pablo (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16107042#comment-16107042
 ] 

Pablo commented on KAFKA-2729:
--

This happened to us with the timeout workaround in place:

{code}
zookeeper.connection.timeout.ms=1
zookeeper.session.timeout.ms=1
{code}

On AWS eu-west-1, on Saturday, using a 0.10.2.0 cluster of 3 brokers and 3 
ZooKeeper nodes with message format 0.8.2.0.



> Cached zkVersion not equal to that in zookeeper, broker not recovering.
> ---
>
> Key: KAFKA-2729
> URL: https://issues.apache.org/jira/browse/KAFKA-2729
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
>Reporter: Danil Serdyuchenko
>
> After a small network wobble where zookeeper nodes couldn't reach each other, 
> we started seeing a large number of undereplicated partitions. The zookeeper 
> cluster recovered, however we continued to see a large number of 
> undereplicated partitions. Two brokers in the kafka cluster were showing this 
> in the logs:
> {code}
> [2015-10-27 11:36:00,888] INFO Partition 
> [__samza_checkpoint_event-creation_1,3] on broker 5: Shrinking ISR for 
> partition [__samza_checkpoint_event-creation_1,3] from 6,5 to 5 
> (kafka.cluster.Partition)
> [2015-10-27 11:36:00,891] INFO Partition 
> [__samza_checkpoint_event-creation_1,3] on broker 5: Cached zkVersion [66] 
> not equal to that in zookeeper, skip updating ISR (kafka.cluster.Partition)
> {code}
> This happened for all of the topics on the affected brokers. Both brokers 
> only recovered after a restart. Our own investigation yielded nothing, so I 
> was hoping you could shed some light on this issue. It is possibly related 
> to https://issues.apache.org/jira/browse/KAFKA-1382 , however we're using 
> 0.8.2.1.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-2729) Cached zkVersion not equal to that in zookeeper, broker not recovering.

2017-07-31 Thread Dan (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16106965#comment-16106965
 ] 

Dan commented on KAFKA-2729:


Happened in 0.11.0.0 as well. Had to restart the broker to bring it back to 
operational state.

> Cached zkVersion not equal to that in zookeeper, broker not recovering.
> ---
>
> Key: KAFKA-2729
> URL: https://issues.apache.org/jira/browse/KAFKA-2729
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
>Reporter: Danil Serdyuchenko
>
> After a small network wobble where zookeeper nodes couldn't reach each other, 
> we started seeing a large number of undereplicated partitions. The zookeeper 
> cluster recovered, however we continued to see a large number of 
> undereplicated partitions. Two brokers in the kafka cluster were showing this 
> in the logs:
> {code}
> [2015-10-27 11:36:00,888] INFO Partition 
> [__samza_checkpoint_event-creation_1,3] on broker 5: Shrinking ISR for 
> partition [__samza_checkpoint_event-creation_1,3] from 6,5 to 5 
> (kafka.cluster.Partition)
> [2015-10-27 11:36:00,891] INFO Partition 
> [__samza_checkpoint_event-creation_1,3] on broker 5: Cached zkVersion [66] 
> not equal to that in zookeeper, skip updating ISR (kafka.cluster.Partition)
> {code}
> This happened for all of the topics on the affected brokers. Both brokers 
> only recovered after a restart. Our own investigation yielded nothing, so I 
> was hoping you could shed some light on this issue. It is possibly related 
> to https://issues.apache.org/jira/browse/KAFKA-1382 , however we're using 
> 0.8.2.1.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5680) Don't materialize physical state stores in KTable filter/map etc operations

2017-07-31 Thread Damian Guy (JIRA)
Damian Guy created KAFKA-5680:
-

 Summary: Don't materialize physical state stores in KTable 
filter/map etc operations
 Key: KAFKA-5680
 URL: https://issues.apache.org/jira/browse/KAFKA-5680
 Project: Kafka
  Issue Type: Bug
Reporter: Damian Guy


Presently, for IQ (interactive queries), we materialize physical state stores 
for {{KTable#filter}}, {{KTable#mapValues}}, etc. operations if the user 
provides a {{queryableStoreName}}. This costs changelog topics, memory, and 
disk space that we could avoid by providing a view over the original state 
store.
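
For context, a rough sketch of the API shape in question against the 0.11-era 
Java DSL (the store name, class name, and the surrounding {{KTable}} are 
assumed/illustrative, not part of this proposal):

{code:java}
import org.apache.kafka.streams.kstream.KTable;

public class KTableFilterMaterializationSketch {
    // `orders` stands in for a KTable obtained from the builder elsewhere.
    static KTable<String, Long> filterExamples(KTable<String, Long> orders) {
        // Passing a queryableStoreName today materializes a full physical store
        // (with its own changelog topic) just for this filtered view:
        KTable<String, Long> largeOrders =
                orders.filter((key, value) -> value != null && value > 10L, "large-orders-store");

        // Omitting the store name avoids the extra store; the idea in this issue
        // is to serve the named, queryable case as a view over the parent store too.
        return largeOrders;
    }
}
{code}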



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)