[jira] [Commented] (KAFKA-2806) Allow Kafka System Tests under different JDK versions

2015-11-11 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15000224#comment-15000224
 ] 

Ismael Juma commented on KAFKA-2806:


I don't think this is a Scala-specific issue. In general, if you compile with 
Java 8, there is no guarantee that it will run under Java 7 due to standard 
library changes.

Having said that, we definitely want to support running the system tests with 
Java 8 so that we can verify that they pass with the current version (and the 
only version receiving security updates from Oracle).

> Allow Kafka System Tests under different JDK versions
> -----------------------------------------------------
>
> Key: KAFKA-2806
> URL: https://issues.apache.org/jira/browse/KAFKA-2806
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
> Fix For: 0.9.0.1
>
>
> Currently the Kafka system tests (using ducktape) uses JDK7 as the runtime 
> inside vagrant processes. However, there are some known issues with executing 
> Java8 builds with JDK7 under Scala:
> https://gist.github.com/AlainODea/1375759b8720a3f9f094
> http://stackoverflow.com/questions/24448723/java-error-java-util-concurrent-concurrenthashmap-keyset
> We need to be able to config the system tests to execute different JDK 
> versions in the virtual machines.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-2806) Allow Kafka System Tests under different JDK versions

2015-11-11 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15000224#comment-15000224
 ] 

Ismael Juma edited comment on KAFKA-2806 at 11/11/15 10:54 AM:
---

I don't think this is a Scala-specific issue. In general, if you compile with 
Java 8, there is no guarantee that it will run under Java 7 due to standard 
library changes.

Having said that, we definitely want to support running the system tests with 
Java 8 so that we can verify that they pass with the current Java version (and 
the only version receiving security updates from Oracle).


was (Author: ijuma):
I don't think this is a Scala-specific issue. In general, if you compile with 
Java 8, there is no guarantee that it will run under Java 7 due to standard 
library changes.

Having said that, we definitely want to support running the system tests with 
Java 8 so that we can verify that they pass with the current version (and the 
only version receiving security updates from Oracle).






[GitHub] kafka pull request: KAFKA-2771: Added rolling upgrade system test ...

2015-11-11 Thread benstopford
GitHub user benstopford opened a pull request:

https://github.com/apache/kafka/pull/496

KAFKA-2771: Added rolling upgrade system test (ducktape) for SSL

This still needs a final run of the full system test suite. I've run this 
test and replication_test.py (the other test most affected) on 
EC2. Both pass. 

I'm not totally happy with the logic around listener selection, but the SASL 
tests moved over to a model with a single port, which doesn't work well when 
performing a rolling bounce. 

To make this stable on EC2 I increased the consumer timeouts and used a 
unique consumer group. 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/benstopford/kafka security-upgrade-test

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/496.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #496


commit ee34dbeb0667f081f77ac32031b571724ecd819b
Author: Ben Stopford 
Date:   2015-11-09T00:09:17Z

KAFKA-2771: Added rolling upgrade system test (ducktape) for SSL




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2771) Add SSL Rolling Upgrade Test to System Tests

2015-11-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15000454#comment-15000454
 ] 

ASF GitHub Bot commented on KAFKA-2771:
---

GitHub user benstopford opened a pull request:

https://github.com/apache/kafka/pull/496

> Add SSL Rolling Upgrade Test to System Tests
> 
>
> Key: KAFKA-2771
> URL: https://issues.apache.org/jira/browse/KAFKA-2771
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Reporter: Ben Stopford
>Assignee: Ben Stopford
>
> Ensure we can perform a rolling upgrade to enable SSL on a running cluster
> *Method*
> - Start with a 0.9.0 cluster with SSL disabled
> - Upgrade the client and inter-broker ports to SSL (this will take two rounds 
> of bounces: one to open the SSL port and one to close the PLAINTEXT port)
> - Ensure you can produce (acks = -1) and consume throughout the process. 
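The two rounds of bounces described above can be sketched as broker config changes. This is a minimal illustration only: the port numbers and the exact per-round settings are assumptions for the example, not taken from the test itself.

```java
import java.util.Properties;

public class RollingSslUpgrade {

    // Round 1: open an SSL port alongside the existing PLAINTEXT port,
    // and switch inter-broker traffic to SSL.
    static Properties roundOne() {
        Properties p = new Properties();
        p.setProperty("listeners", "PLAINTEXT://:9092,SSL://:9093");
        p.setProperty("security.inter.broker.protocol", "SSL");
        return p;
    }

    // Round 2: close the PLAINTEXT port, leaving SSL only.
    static Properties roundTwo() {
        Properties p = new Properties();
        p.setProperty("listeners", "SSL://:9093");
        p.setProperty("security.inter.broker.protocol", "SSL");
        return p;
    }

    public static void main(String[] args) {
        System.out.println("round 1 listeners: " + roundOne().getProperty("listeners"));
        System.out.println("round 2 listeners: " + roundTwo().getProperty("listeners"));
    }
}
```

Each round is applied with a rolling bounce of every broker, producing and consuming throughout to verify availability.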





[jira] [Updated] (KAFKA-2790) Kafka 0.9.0 doc improvement

2015-11-11 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2790:
---
Priority: Blocker  (was: Major)

> Kafka 0.9.0 doc improvement
> ---
>
> Key: KAFKA-2790
> URL: https://issues.apache.org/jira/browse/KAFKA-2790
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Gwen Shapira
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Observed a few issues after uploading the 0.9.0 docs to the Apache site 
> (http://kafka.apache.org/090/documentation.html).
> 1. There are a few places still mentioning 0.8.2.
> docs/api.html:We are in the process of rewritting the JVM clients for Kafka. 
> As of 0.8.2 Kafka includes a newly rewritten Java producer. The next release 
> will include an equivalent Java consumer. These new clients are meant to 
> supplant the existing Scala clients, but for compatability they will co-exist 
> for some time. These clients are available in a seperate jar with minimal 
> dependencies, while the old Scala clients remain packaged with the server.
> docs/api.html:As of the 0.8.2 release we encourage all new development to use 
> the new Java producer. This client is production tested and generally both 
> faster and more fully featured than the previous Scala client. You can use 
> this client by adding a dependency on the client jar using the following 
> example maven co-ordinates (you can change the version numbers with new 
> releases):
> docs/api.html:0.8.2.0
> docs/ops.html:The partition reassignment tool does not have the ability to 
> automatically generate a reassignment plan for decommissioning brokers yet. 
> As such, the admin has to come up with a reassignment plan to move the 
> replica for all partitions hosted on the broker to be decommissioned, to the 
> rest of the brokers. This can be relatively tedious as the reassignment needs 
> to ensure that all the replicas are not moved from the decommissioned broker 
> to only one other broker. To make this process effortless, we plan to add 
> tooling support for decommissioning brokers in 0.8.2.
> docs/quickstart.html: href="https://www.apache.org/dyn/closer.cgi?path=/kafka/0.8.2.0/kafka_2.10-0.8.2.0.tgz";
>  title="Kafka downloads">Download the 0.8.2.0 release and un-tar it.
> docs/quickstart.html:> tar -xzf kafka_2.10-0.8.2.0.tgz
> docs/quickstart.html:> cd kafka_2.10-0.8.2.0
> 2. The generated config tables (broker, producer and consumer) don't have the 
> proper table frames.





[jira] [Commented] (KAFKA-2805) RecordAccumulator request timeout not enforced when all brokers are gone

2015-11-11 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15000633#comment-15000633
 ] 

Jun Rao commented on KAFKA-2805:


[~mgharat], I am not sure the current logic works. If the leader is not null 
but its broker is not connectable, then in Sender.run() the partitions for this 
leader will be ready but not drainable, since the leader is not connectable. 
So messages in those partitions will never time out under the current logic. 

In Jason's test, you can get into the above situation when the only broker is 
killed, since metadata won't be refreshed after the broker is down. In your 
test, if you provided multiple brokers in the broker list, things are a bit 
different: the producer will be able to refresh metadata from the other brokers 
and see that the leader is gone. In this case, the producer will see a null 
leader. That's probably why you don't see the issue in your test. In both 
cases the effect is pretty much the same: we can't send the partitions' data. 
So the simplest solution is probably to remove the null check on the leader in 
abortExpiredBatches(). If the leader can't be connected to for a long period of 
time, the batches are guaranteed to be expired. If the leader is 
connectable, but the send fails (e.g., due to NotLeader), lastAttemptMs will be 
updated and we will go through the retries. Does that sound reasonable?
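The effect of the two checks can be shown with a standalone sketch. This loosely models the expiration decision under discussion; the class and field names are illustrative and this is not the actual RecordAccumulator code.

```java
public class BatchExpirySketch {

    /** Minimal stand-in for a record batch awaiting send. */
    static class Batch {
        final long createdMs;
        final boolean leaderKnown;   // false models a null leader in the metadata
        Batch(long createdMs, boolean leaderKnown) {
            this.createdMs = createdMs;
            this.leaderKnown = leaderKnown;
        }
    }

    // Current behavior: a batch is only expired when its leader is null, so a
    // batch whose partition still has a (stale) leader is never expired.
    static boolean expiredWithNullCheck(Batch b, long nowMs, long requestTimeoutMs) {
        return !b.leaderKnown && nowMs - b.createdMs > requestTimeoutMs;
    }

    // Proposed behavior: expire on timeout regardless of whether a leader is known.
    static boolean expiredWithoutNullCheck(Batch b, long nowMs, long requestTimeoutMs) {
        return nowMs - b.createdMs > requestTimeoutMs;
    }

    public static void main(String[] args) {
        // Leader is cached but unreachable (the single-broker-killed scenario).
        Batch staleLeader = new Batch(0L, true);
        System.out.println(expiredWithNullCheck(staleLeader, 60_000L, 30_000L));    // false: stuck forever
        System.out.println(expiredWithoutNullCheck(staleLeader, 60_000L, 30_000L)); // true: times out
    }
}
```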


> RecordAccumulator request timeout not enforced when all brokers are gone
> 
>
> Key: KAFKA-2805
> URL: https://issues.apache.org/jira/browse/KAFKA-2805
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Mayuresh Gharat
>
> When no brokers are left in the cluster, the producer seems not to enforce 
> the request timeout as expected.
> From the user mailing list, the null check in batch expiration in 
> RecordAccumulator seems questionable: 
> https://github.com/apache/kafka/blob/ae5a5d7c08bb634576a414f6f2864c5b8a7e58a3/clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java#L220.
>  
> If this is correct behavior, it is probably worthwhile clarifying the purpose 
> of the check in a comment.





[jira] [Updated] (KAFKA-2805) RecordAccumulator request timeout not enforced when all brokers are gone

2015-11-11 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2805:
---
Fix Version/s: 0.9.0.0

Marking this as a 0.9.0.0 blocker for now.






[jira] [Commented] (KAFKA-2805) RecordAccumulator request timeout not enforced when all brokers are gone

2015-11-11 Thread Luke Steensen (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15000651#comment-15000651
 ] 

Luke Steensen commented on KAFKA-2805:
--

[~mgharat] To answer your question from the mailing list, I was only running 
one broker in the cluster for this case. My intention was to do a simple test 
of the new behavior before pushing the new version to one of our deployed 
environments. 

It's worth noting that you can get this same behavior with the following 
producer config:
{code}
retries=0
max.block.ms=1000
{code}

I will continue testing, but it does seem that the configs are respected when 
running more than one broker. 

Thanks for taking a look!
Luke
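For reference, the same settings expressed as Java producer properties; a minimal sketch, with the bootstrap address as a placeholder.

```java
import java.util.Properties;

public class ReproConfig {

    // Producer settings from the comment above: no retries, and block at most
    // one second waiting for metadata when no broker is reachable.
    static Properties producerProps() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder address
        props.setProperty("retries", "0");
        props.setProperty("max.block.ms", "1000");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(producerProps());
    }
}
```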

> RecordAccumulator request timeout not enforced when all brokers are gone
> 
>
> Key: KAFKA-2805
> URL: https://issues.apache.org/jira/browse/KAFKA-2805
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Mayuresh Gharat
> Fix For: 0.9.0.0
>
>
> When no brokers are left in the cluster, the producer seems not to enforce 
> the request timeout as expected.
> From the user mailing list, the null check in batch expiration in 
> RecordAccumulator seems questionable: 
> https://github.com/apache/kafka/blob/ae5a5d7c08bb634576a414f6f2864c5b8a7e58a3/clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java#L220.
>  
> If this is correct behavior, it is probably worthwhile clarifying the purpose 
> of the check in a comment.





[GitHub] kafka pull request: KAFKA-2763: better stream task assignment

2015-11-11 Thread ymatsuda
GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/497

KAFKA-2763: better stream task assignment

@guozhangwang 

When a rebalance happens, each consumer reports the following information 
to the coordinator:
* Client UUID (a unique id assigned to an instance of KafkaStreaming) 
* Task ids of previously running tasks
* Task ids of valid local states in the client's state directory

TaskAssignor does the following:
* Assign a task to the client that was running it previously. If there is no 
such client, assign it to a client that has its valid local state.
* Try to balance the load among stream threads.
  * A client may have more than one stream thread. The assignor tries to 
assign tasks to a client in proportion to its number of threads.
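The preference order described above (previous owner first, then a client holding valid local state, then per-thread load balancing) could be sketched as follows. This is an illustrative standalone model, not the actual TaskAssignor code; all names are invented for the example.

```java
import java.util.*;

public class StickyAssignmentSketch {

    /** Per-client info reported on rebalance (field names are illustrative). */
    static class ClientState {
        final Set<Integer> prevTasks;        // tasks this client ran before
        final Set<Integer> localStateTasks;  // tasks with valid local state here
        final int numThreads;
        final List<Integer> assigned = new ArrayList<>();
        ClientState(Set<Integer> prev, Set<Integer> local, int threads) {
            prevTasks = prev; localStateTasks = local; numThreads = threads;
        }
    }

    static void assign(List<Integer> tasks, Map<String, ClientState> clients) {
        for (int task : tasks) {
            String best = null;
            int bestRank = Integer.MAX_VALUE;
            double bestLoad = Double.MAX_VALUE;
            for (Map.Entry<String, ClientState> e : clients.entrySet()) {
                ClientState c = e.getValue();
                // Rank 0: ran the task before; rank 1: has its local state; rank 2: anyone.
                int rank = c.prevTasks.contains(task) ? 0
                         : c.localStateTasks.contains(task) ? 1 : 2;
                // Load is measured per thread, so bigger clients absorb more tasks.
                double load = (double) c.assigned.size() / c.numThreads;
                if (rank < bestRank || (rank == bestRank && load < bestLoad)) {
                    best = e.getKey(); bestRank = rank; bestLoad = load;
                }
            }
            clients.get(best).assigned.add(task);
        }
    }

    public static void main(String[] args) {
        Map<String, ClientState> clients = new LinkedHashMap<>();
        clients.put("a", new ClientState(new HashSet<>(Arrays.asList(1)), new HashSet<>(), 1));
        clients.put("b", new ClientState(new HashSet<>(), new HashSet<>(Arrays.asList(2)), 2));
        assign(Arrays.asList(1, 2, 3), clients);
        // Task 1 sticks to "a" (previous owner), task 2 goes to "b" (local state),
        // task 3 goes to "b" (lower per-thread load).
        System.out.println("a=" + clients.get("a").assigned + " b=" + clients.get("b").assigned);
    }
}
```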


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymatsuda/kafka task_assignment

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/497.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #497


commit 0e4cd31d3f9f055aa8db5917bcb30f1dbc3da984
Author: Yasuhiro Matsuda 
Date:   2015-11-11T17:46:57Z

better task assignment






[jira] [Commented] (KAFKA-2763) Reduce stream task migrations and initialization costs

2015-11-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15000798#comment-15000798
 ] 

ASF GitHub Bot commented on KAFKA-2763:
---

GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/497





> Reduce stream task migrations and initialization costs
> --
>
> Key: KAFKA-2763
> URL: https://issues.apache.org/jira/browse/KAFKA-2763
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Affects Versions: 0.9.0.0
>Reporter: Yasuhiro Matsuda
>
> Stream task assignment is not aware of either the previous task assignment or 
> local states of participating clients. By making the assignment logic aware 
> of them, we can reduce task migrations and initialization cost.





[jira] [Created] (KAFKA-2808) Support auto.create.topics.enable with automatic WRITE permissions for creator

2015-11-11 Thread Thomas Graves (JIRA)
Thomas Graves created KAFKA-2808:


 Summary: Support auto.create.topics.enable with automatic WRITE 
permissions for creator 
 Key: KAFKA-2808
 URL: https://issues.apache.org/jira/browse/KAFKA-2808
 Project: Kafka
  Issue Type: Improvement
  Components: core
Affects Versions: 0.9.0.0
Reporter: Thomas Graves


We have a user who wants to use the topic auto-create functionality and 
automatically be granted WRITE permissions, so that they don't have to 
explicitly create topics and grant ACLs ahead of time, or make explicit calls. 

It seems that if auto.create.topics.enable is enabled and the user has 
CREATE ACLs, we could automatically grant WRITE ACLs to the user who creates 
the topic. Without that, auto-creating topics with ACLs doesn't add much 
benefit.
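A toy sketch of the proposed behavior. This is a hypothetical hook, not Kafka's actual authorizer API; the class, method, and store shape are invented for illustration (a real implementation would key ACLs by topic and principal).

```java
import java.util.*;

public class AutoCreateAclSketch {
    enum Op { CREATE, WRITE }

    // Illustrative ACL store: principal -> operations granted.
    static final Map<String, Set<Op>> acls = new HashMap<>();

    /** Hypothetical hook: when a topic is auto-created, grant the creator WRITE. */
    static void onAutoCreate(String principal) {
        Set<Op> ops = acls.computeIfAbsent(principal, k -> EnumSet.noneOf(Op.class));
        // Only extend permissions if the creator was allowed to create in the first place.
        if (ops.contains(Op.CREATE)) {
            ops.add(Op.WRITE);
        }
    }

    public static void main(String[] args) {
        acls.put("alice", EnumSet.of(Op.CREATE));
        onAutoCreate("alice");
        System.out.println(acls.get("alice").contains(Op.WRITE)); // true
    }
}
```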





[jira] [Commented] (KAFKA-2808) Support auto.create.topics.enable with automatic WRITE permissions for creator

2015-11-11 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15000828#comment-15000828
 ] 

Thomas Graves commented on KAFKA-2808:
--

cc'd [~parth.brahmbhatt]






[jira] [Commented] (KAFKA-2808) Support auto.create.topics.enable with automatic WRITE permissions for creator

2015-11-11 Thread Parth Brahmbhatt (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15000865#comment-15000865
 ] 

Parth Brahmbhatt commented on KAFKA-2808:
-

Ideally we would always give the creator of a topic admin access to it; the 
problem is that Kafka does not have the concept of a topic owner. I filed a 
JIRA to introduce the concept of a topic owner 
(https://issues.apache.org/jira/browse/KAFKA-2145), which should lay the 
groundwork to support your request. However, the PR has been stuck in review 
for more than 3 months now, so I don't expect this request to be fulfilled in 
the upcoming release.






[jira] [Commented] (KAFKA-2808) Support auto.create.topics.enable with automatic WRITE permissions for creator

2015-11-11 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15000874#comment-15000874
 ] 

Thomas Graves commented on KAFKA-2808:
--

I haven't looked at the pull request, but I'm not sure it really solves this 
problem, because I would again have to explicitly set it. 

I guess namespaces could help solve this, or ACLs on wildcard topics. 
In this particular use case, at least, the topics will all have the same 
prefix, but the suffix is unknown and can change.

What do you see as the problem with automatically giving WRITE permissions to 
the creator? If you trust them enough to CREATE the topic, wouldn't it 
generally mean they should be allowed to write as well?






[jira] [Comment Edited] (KAFKA-2808) Support auto.create.topics.enable with automatic WRITE permissions for creator

2015-11-11 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15000874#comment-15000874
 ] 

Thomas Graves edited comment on KAFKA-2808 at 11/11/15 6:44 PM:


I haven't looked at the pull request, but I'm not sure it really solves this 
problem, because I would again have to explicitly set it. 

I guess namespaces could help solve this, or ACLs on wildcard topics. 
In this particular use case, at least, the topics will all have the same 
prefix, but the suffix is unknown and can change.

What do you see as the problem with automatically giving WRITE permissions to 
the creator? If you trust them enough to CREATE the topic, wouldn't it 
generally mean they should be allowed to write as well?


was (Author: tgraves):
I haven't looked a the pull request but I'm not sure that really solves this 
problem because I again would have to explicitly set it.  

I guess perhaps namespaces can help solve this.  Or if we could do acls on 
wildcard topics.
In this particular use case at least the topics will all have the same prefix 
but the suffix part is unknown and can change.

What do you see as the problem with automatically giving WRITE permissions to 
the creator?  If you trust them enough to CREATE the topic would it generally 
mean they should be allowed to write also.






[jira] [Commented] (KAFKA-2808) Support auto.create.topics.enable with automatic WRITE permissions for creator

2015-11-11 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15000876#comment-15000876
 ] 

Thomas Graves commented on KAFKA-2808:
--

I'm sure there are cases where you wouldn't want this, so it could be a config.






[jira] [Commented] (KAFKA-2808) Support auto.create.topics.enable with automatic WRITE permissions for creator

2015-11-11 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15000880#comment-15000880
 ] 

Thomas Graves commented on KAFKA-2808:
--

Sorry, I just realized what you were perhaps thinking. You were saying that if 
you had a topic owner and auto-create was on, it would automatically set the 
owner to the creator, who would thus also have WRITE permissions.






[GitHub] kafka pull request: KAFKA-2790: doc improvements

2015-11-11 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/491




[jira] [Resolved] (KAFKA-2790) Kafka 0.9.0 doc improvement

2015-11-11 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-2790.
-
Resolution: Fixed

Issue resolved by pull request 491
[https://github.com/apache/kafka/pull/491]




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2790) Kafka 0.9.0 doc improvement

2015-11-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15000888#comment-15000888
 ] 

ASF GitHub Bot commented on KAFKA-2790:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/491


> Kafka 0.9.0 doc improvement
> ---
>
> Key: KAFKA-2790
> URL: https://issues.apache.org/jira/browse/KAFKA-2790
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Gwen Shapira
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Observed a few issues after uploading the 0.9.0 docs to the Apache site 
> (http://kafka.apache.org/090/documentation.html).
> 1. There are a few places still mentioning 0.8.2.
> docs/api.html:We are in the process of rewritting the JVM clients for Kafka. 
> As of 0.8.2 Kafka includes a newly rewritten Java producer. The next release 
> will include an equivalent Java consumer. These new clients are meant to 
> supplant the existing Scala clients, but for compatability they will co-exist 
> for some time. These clients are available in a seperate jar with minimal 
> dependencies, while the old Scala clients remain packaged with the server.
> docs/api.html:As of the 0.8.2 release we encourage all new development to use 
> the new Java producer. This client is production tested and generally both 
> faster and more fully featured than the previous Scala client. You can use 
> this client by adding a dependency on the client jar using the following 
> example maven co-ordinates (you can change the version numbers with new 
> releases):
> docs/api.html:0.8.2.0
> docs/ops.html:The partition reassignment tool does not have the ability to 
> automatically generate a reassignment plan for decommissioning brokers yet. 
> As such, the admin has to come up with a reassignment plan to move the 
> replica for all partitions hosted on the broker to be decommissioned, to the 
> rest of the brokers. This can be relatively tedious as the reassignment needs 
> to ensure that all the replicas are not moved from the decommissioned broker 
> to only one other broker. To make this process effortless, we plan to add 
> tooling support for decommissioning brokers in 0.8.2.
> docs/quickstart.html: href="https://www.apache.org/dyn/closer.cgi?path=/kafka/0.8.2.0/kafka_2.10-0.8.2.0.tgz";
>  title="Kafka downloads">Download the 0.8.2.0 release and un-tar it.
> docs/quickstart.html:> tar -xzf kafka_2.10-0.8.2.0.tgz
> docs/quickstart.html:> cd kafka_2.10-0.8.2.0
> 2. The generated config tables (broker, producer and consumer) don't have the 
> proper table frames.





[jira] [Commented] (KAFKA-2805) RecordAccumulator request timeout not enforced when all brokers are gone

2015-11-11 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15000891#comment-15000891
 ] 

Jun Rao commented on KAFKA-2805:


[~mgharat], since this blocks the 0.9.0.0 release, do you think you can look at 
this in the next few hours?

> RecordAccumulator request timeout not enforced when all brokers are gone
> 
>
> Key: KAFKA-2805
> URL: https://issues.apache.org/jira/browse/KAFKA-2805
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Mayuresh Gharat
> Fix For: 0.9.0.0
>
>
> When no brokers are left in the cluster, the producer seems not to enforce 
> the request timeout as expected.
> From the user mailing list, the null check in batch expiration in 
> RecordAccumulator seems questionable: 
> https://github.com/apache/kafka/blob/ae5a5d7c08bb634576a414f6f2864c5b8a7e58a3/clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java#L220.
>  
> If this is correct behavior, it is probably worthwhile clarifying the purpose 
> of the check in a comment.





[jira] [Updated] (KAFKA-2805) RecordAccumulator request timeout not enforced when all brokers are gone

2015-11-11 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2805:
---
Priority: Blocker  (was: Major)

> RecordAccumulator request timeout not enforced when all brokers are gone
> 
>
> Key: KAFKA-2805
> URL: https://issues.apache.org/jira/browse/KAFKA-2805
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Mayuresh Gharat
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> When no brokers are left in the cluster, the producer seems not to enforce 
> the request timeout as expected.
> From the user mailing list, the null check in batch expiration in 
> RecordAccumulator seems questionable: 
> https://github.com/apache/kafka/blob/ae5a5d7c08bb634576a414f6f2864c5b8a7e58a3/clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java#L220.
>  
> If this is correct behavior, it is probably worthwhile clarifying the purpose 
> of the check in a comment.





[jira] [Commented] (KAFKA-2807) Movement of throughput throttler to common broke upgrade tests

2015-11-11 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15000934#comment-15000934
 ] 

Ewen Cheslack-Postava commented on KAFKA-2807:
--

[~geoffra] Hmm, I moved it into common because that seemed the logical place 
given that it is now used by both the tools and copycat-tools classes. I think 
we could probably resolve this by moving it back to tools and having 
copycat-tools depend on tools. It's a little awkward, but I don't think it 
would cause any problems for copycat-tools.

> Movement of throughput throttler to common broke upgrade tests
> --
>
> Key: KAFKA-2807
> URL: https://issues.apache.org/jira/browse/KAFKA-2807
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
> Fix For: 0.9.0.0
>
>
> In order to run compatibility tests with an 0.8.2 producer, and using 
> VerifiableProducer, we use the 0.8.2 kafka-run-tools.sh classpath augmented 
> by the 0.9.0 tools and tools dependencies classpaths.
> Recently, some refactoring efforts moved ThroughputThrottler to 
> org.apache.kafka.common.utils package, but this breaks the existing 
> compatibility tests:
> {code}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/kafka/common/utils/ThroughputThrottler
> at 
> org.apache.kafka.tools.VerifiableProducer.main(VerifiableProducer.java:334)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.kafka.common.utils.ThroughputThrottler
> at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
> ... 1 more
> {code}
> Given the need to be able to run VerifiableProducer against 0.8.X, I'm not 
> sure VerifiableProducer can depend on org.apache.kafka.common.utils at this 
> point in time. 





[jira] [Created] (KAFKA-2809) Improve documentation linking

2015-11-11 Thread Grant Henke (JIRA)
Grant Henke created KAFKA-2809:
--

 Summary: Improve documentation linking
 Key: KAFKA-2809
 URL: https://issues.apache.org/jira/browse/KAFKA-2809
 Project: Kafka
  Issue Type: Improvement
  Components: website
Affects Versions: 0.8.2.2
Reporter: Grant Henke
Assignee: Grant Henke
 Fix For: 0.9.0.0


Often it is useful to link to a specific header within the documentation, 
especially when referencing docs on the mailing lists.

This JIRA is to add anchors and links for all headers in the docs.
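One simple scheme for this (a sketch, not necessarily what the eventual patch does) is to derive an anchor id from each header title, so that, for example, a "Broker Configs" section becomes linkable as #broker-configs:

```java
import java.util.Locale;

public class HeaderAnchors {
    // Turn a header title into a URL-fragment-friendly anchor id:
    // lowercase, collapse runs of non-alphanumerics to '-', trim the edges.
    static String toAnchor(String header) {
        return header.toLowerCase(Locale.ROOT)
                     .replaceAll("[^a-z0-9]+", "-")
                     .replaceAll("^-+|-+$", "");
    }
}
```

For example, toAnchor("Broker Configs") yields "broker-configs".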





[jira] [Updated] (KAFKA-2807) Movement of throughput throttler to common broke upgrade tests

2015-11-11 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-2807:
-
Priority: Blocker  (was: Major)

> Movement of throughput throttler to common broke upgrade tests
> --
>
> Key: KAFKA-2807
> URL: https://issues.apache.org/jira/browse/KAFKA-2807
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> In order to run compatibility tests with an 0.8.2 producer, and using 
> VerifiableProducer, we use the 0.8.2 kafka-run-tools.sh classpath augmented 
> by the 0.9.0 tools and tools dependencies classpaths.
> Recently, some refactoring efforts moved ThroughputThrottler to 
> org.apache.kafka.common.utils package, but this breaks the existing 
> compatibility tests:
> {code}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/kafka/common/utils/ThroughputThrottler
> at 
> org.apache.kafka.tools.VerifiableProducer.main(VerifiableProducer.java:334)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.kafka.common.utils.ThroughputThrottler
> at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
> ... 1 more
> {code}
> Given the need to be able to run VerifiableProducer against 0.8.X, I'm not 
> sure VerifiableProducer can depend on org.apache.kafka.common.utils at this 
> point in time. 





[jira] [Commented] (KAFKA-2807) Movement of throughput throttler to common broke upgrade tests

2015-11-11 Thread Neha Narkhede (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15000969#comment-15000969
 ] 

Neha Narkhede commented on KAFKA-2807:
--

We don't want our tests to fail, so marking this as a blocker.

> Movement of throughput throttler to common broke upgrade tests
> --
>
> Key: KAFKA-2807
> URL: https://issues.apache.org/jira/browse/KAFKA-2807
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> In order to run compatibility tests with an 0.8.2 producer, and using 
> VerifiableProducer, we use the 0.8.2 kafka-run-tools.sh classpath augmented 
> by the 0.9.0 tools and tools dependencies classpaths.
> Recently, some refactoring efforts moved ThroughputThrottler to 
> org.apache.kafka.common.utils package, but this breaks the existing 
> compatibility tests:
> {code}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/kafka/common/utils/ThroughputThrottler
> at 
> org.apache.kafka.tools.VerifiableProducer.main(VerifiableProducer.java:334)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.kafka.common.utils.ThroughputThrottler
> at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
> ... 1 more
> {code}
> Given the need to be able to run VerifiableProducer against 0.8.X, I'm not 
> sure VerifiableProducer can depend on org.apache.kafka.common.utils at this 
> point in time. 





Build failed in Jenkins: kafka-trunk-jdk8 #133

2015-11-11 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-2790: doc improvements

--
[...truncated 3654 lines...]

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.FileMessageSetTest > testTruncate PASSED

kafka.log.FileMessageSetTest > testIterationOverPartialAndTruncation PASSED

kafka.log.FileMessageSetTest > testRead PASSED

kafka.log.FileMessageSetTest > testFileSize PASSED

kafka.log.FileMessageSetTest > testIteratorWithLimits PASSED

kafka.log.FileMessageSetTest > testPreallocateTrue PASSED

kafka.log.FileMessageSetTest > testIteratorIsConsistent PASSED

kafka.log.FileMessageSetTest > testIterationDoesntChangePosition PASSED

kafka.log.FileMessageSetTest > testWrittenEqualsRead PASSED

kafka.log.FileMessageSetTest > testWriteTo PASSED

kafka.log.FileMessageSetTest > testPreallocateFalse PASSED

kafka.log.FileMessageSetTest > testPreallocateClearShutdown PASSED

kafka.log.FileMessageSetTest > testSearch PASSED

kafka.log.FileMessageSetTest > testSizeInBytes PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingTopic PASSED

kafka.log.LogTest > testIndexRebuild PASSED

kafka.log.LogTest > testLogRolls PASSED

kafka.log.LogTest > testMessageSizeCheck PASSED

kafka.log.LogTest > testAsyncDelete PASSED

kafka.log.LogTest > testReadOutOfRange PASSED

kafka.log.LogTest > testReadAtLogGap PASSED

kafka.log.LogTest > testTimeBasedLogRoll PASSED

kafka.log.LogTest > testLoadEmptyLog PASSED

kafka.log.LogTest > testMessageSetSizeCheck PASSED

kafka.log.LogTest > testIndexResizingAtTruncation PASSED

kafka.log.LogTest > testCompactedTopicConstraints PASSED

kafka.log.LogTest > testThatGarbageCollectingSegmentsDoesntChangeOffset PASSED

kafka.log.LogTest > testAppendAndReadWithSequentialOffsets PASSED

kafka.log.LogTest > testParseT

[jira] [Created] (KAFKA-2810) Build a small subset of system tests on PR

2015-11-11 Thread Geoff Anderson (JIRA)
Geoff Anderson created KAFKA-2810:
-

 Summary: Build a small subset of system tests on PR
 Key: KAFKA-2810
 URL: https://issues.apache.org/jira/browse/KAFKA-2810
 Project: Kafka
  Issue Type: Wish
Reporter: Geoff Anderson


It would be useful to run some small subset of (short) system tests on each PR.

I think a reasonable approach might be to run the set of tests which 
specifically test functionality of service classes (currently these are called 
"sanity checks"), so developers would at least get immediate feedback on 
whether a change broke a service class in an obvious way. 






[jira] [Commented] (KAFKA-2808) Support auto.create.topics.enable with automatic WRITE permissions for creator

2015-11-11 Thread Parth Brahmbhatt (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15000985#comment-15000985
 ] 

Parth Brahmbhatt commented on KAFKA-2808:
-

[~tgraves] your last comment is what I had in mind. Basically, any time a topic 
is created, whether via the CLI, AdminUtils, or auto-creation, in secure mode 
we should be able to derive the identity of the user creating the topic (from 
the JAAS login or, for auto-creation, from the caller's session on the server 
side) and assign them as the owner.

Namespaces can solve the problem, and I believe 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-37+-+Add+Namespaces+to+Kafka
 is addressing it.
I am assuming that by WildCardTopics you mean something that supports regexes, 
which wouldn't be that different from namespacing itself.
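The proposed behavior can be sketched with a toy ACL store (hypothetical types and names; the real authorizer API differs): when a topic is auto-created in secure mode, the creating principal is recorded as the owner and granted WRITE in the same step, so no separate ACL call is needed:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class AutoCreateAcls {
    final Set<String> topics = new HashSet<>();
    final Map<String, String> owner = new HashMap<>();
    final Map<String, Set<String>> writeAcls = new HashMap<>();

    // On auto-create, derive the caller's principal (e.g. from the server-side
    // session or JAAS login) and grant WRITE alongside ownership.
    void autoCreate(String topic, String creatorPrincipal) {
        if (topics.add(topic)) {
            owner.put(topic, creatorPrincipal);
            writeAcls.computeIfAbsent(topic, t -> new HashSet<>()).add(creatorPrincipal);
        }
    }

    boolean canWrite(String topic, String principal) {
        return writeAcls.getOrDefault(topic, Set.of()).contains(principal);
    }
}
```

The point of the sketch is only the ordering: the grant happens atomically with creation, so a producer that triggered auto-creation can immediately write without an explicit ACL step.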

> Support auto.create.topics.enable with automatic WRITE permissions for 
> creator 
> ---
>
> Key: KAFKA-2808
> URL: https://issues.apache.org/jira/browse/KAFKA-2808
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Thomas Graves
>
> we have a user that wants to use the topic auto create functionality and 
> automatically have it give WRITE permissions so that they don't have to 
> explicitly create and grant acls ahead of time or make explicit call. 
> it seems like if you have auto.create.topics.enable enabled and the user has 
> CREATE acls we could automatically just give WRITE acls to the user who 
> creates the topic. Without that the auto create topics with acls doesn't add 
> much benefit.





[GitHub] kafka pull request: KAFKA-2809: Improve documentation linking

2015-11-11 Thread granthenke
GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/498

KAFKA-2809: Improve documentation linking

Often it is useful to link to a specific header within the documentation. 
Especially when referencing docs in the mailing lists.

This adds anchors and links for all headers in the docs.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka doc-links

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/498.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #498


commit 3c7f762ec4028906071dfad540311e815bda8355
Author: Grant Henke 
Date:   2015-11-11T19:41:56Z

KAFKA-2809: Improve documentation linking




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2809) Improve documentation linking

2015-11-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15000990#comment-15000990
 ] 

ASF GitHub Bot commented on KAFKA-2809:
---

GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/498

KAFKA-2809: Improve documentation linking

Often it is useful to link to a specific header within the documentation. 
Especially when referencing docs in the mailing lists.

This adds anchors and links for all headers in the docs.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka doc-links

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/498.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #498


commit 3c7f762ec4028906071dfad540311e815bda8355
Author: Grant Henke 
Date:   2015-11-11T19:41:56Z

KAFKA-2809: Improve documentation linking




> Improve documentation linking
> -
>
> Key: KAFKA-2809
> URL: https://issues.apache.org/jira/browse/KAFKA-2809
> Project: Kafka
>  Issue Type: Improvement
>  Components: website
>Affects Versions: 0.8.2.2
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.9.0.0
>
>
> Often it is useful to link to a specific header within the documentation. 
> Especially when referencing docs in the mailing lists. 
> This Jira is to add anchors and links for all headers in the docs.





[jira] [Updated] (KAFKA-2809) Improve documentation linking

2015-11-11 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-2809:
---
Status: Patch Available  (was: Open)

> Improve documentation linking
> -
>
> Key: KAFKA-2809
> URL: https://issues.apache.org/jira/browse/KAFKA-2809
> Project: Kafka
>  Issue Type: Improvement
>  Components: website
>Affects Versions: 0.8.2.2
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.9.0.0
>
>
> Often it is useful to link to a specific header within the documentation. 
> Especially when referencing docs in the mailing lists. 
> This Jira is to add anchors and links for all headers in the docs.





[jira] [Assigned] (KAFKA-2805) RecordAccumulator request timeout not enforced when all brokers are gone

2015-11-11 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson reassigned KAFKA-2805:
--

Assignee: Jason Gustafson  (was: Mayuresh Gharat)

> RecordAccumulator request timeout not enforced when all brokers are gone
> 
>
> Key: KAFKA-2805
> URL: https://issues.apache.org/jira/browse/KAFKA-2805
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> When no brokers are left in the cluster, the producer seems not to enforce 
> the request timeout as expected.
> From the user mailing list, the null check in batch expiration in 
> RecordAccumulator seems questionable: 
> https://github.com/apache/kafka/blob/ae5a5d7c08bb634576a414f6f2864c5b8a7e58a3/clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java#L220.
>  
> If this is correct behavior, it is probably worthwhile clarifying the purpose 
> of the check in a comment.





[jira] [Commented] (KAFKA-2805) RecordAccumulator request timeout not enforced when all brokers are gone

2015-11-11 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15001004#comment-15001004
 ] 

Jason Gustafson commented on KAFKA-2805:


Went ahead and assigned this to myself to try to unblock the release. 
[~mgharat] If you're already working on it, feel free to assign it back to 
yourself.

> RecordAccumulator request timeout not enforced when all brokers are gone
> 
>
> Key: KAFKA-2805
> URL: https://issues.apache.org/jira/browse/KAFKA-2805
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> When no brokers are left in the cluster, the producer seems not to enforce 
> the request timeout as expected.
> From the user mailing list, the null check in batch expiration in 
> RecordAccumulator seems questionable: 
> https://github.com/apache/kafka/blob/ae5a5d7c08bb634576a414f6f2864c5b8a7e58a3/clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java#L220.
>  
> If this is correct behavior, it is probably worthwhile clarifying the purpose 
> of the check in a comment.





[jira] [Commented] (KAFKA-2805) RecordAccumulator request timeout not enforced when all brokers are gone

2015-11-11 Thread Mayuresh Gharat (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15001024#comment-15001024
 ] 

Mayuresh Gharat commented on KAFKA-2805:


Hi Jason,

I was debugging to find out exactly what is happening. Is it OK if I get back 
with a patch (if necessary) by EOD? If I am not able to finish this today, you 
can go ahead with this JIRA.

Thanks,

Mayuresh

> RecordAccumulator request timeout not enforced when all brokers are gone
> 
>
> Key: KAFKA-2805
> URL: https://issues.apache.org/jira/browse/KAFKA-2805
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> When no brokers are left in the cluster, the producer seems not to enforce 
> the request timeout as expected.
> From the user mailing list, the null check in batch expiration in 
> RecordAccumulator seems questionable: 
> https://github.com/apache/kafka/blob/ae5a5d7c08bb634576a414f6f2864c5b8a7e58a3/clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java#L220.
>  
> If this is correct behavior, it is probably worthwhile clarifying the purpose 
> of the check in a comment.





[jira] [Commented] (KAFKA-2805) RecordAccumulator request timeout not enforced when all brokers are gone

2015-11-11 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15001040#comment-15001040
 ] 

Jason Gustafson commented on KAFKA-2805:


Works for me. I'll assign it back to you.

> RecordAccumulator request timeout not enforced when all brokers are gone
> 
>
> Key: KAFKA-2805
> URL: https://issues.apache.org/jira/browse/KAFKA-2805
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> When no brokers are left in the cluster, the producer seems not to enforce 
> the request timeout as expected.
> From the user mailing list, the null check in batch expiration in 
> RecordAccumulator seems questionable: 
> https://github.com/apache/kafka/blob/ae5a5d7c08bb634576a414f6f2864c5b8a7e58a3/clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java#L220.
>  
> If this is correct behavior, it is probably worthwhile clarifying the purpose 
> of the check in a comment.





[jira] [Updated] (KAFKA-2805) RecordAccumulator request timeout not enforced when all brokers are gone

2015-11-11 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-2805:
---
Assignee: Mayuresh Gharat  (was: Jason Gustafson)

> RecordAccumulator request timeout not enforced when all brokers are gone
> 
>
> Key: KAFKA-2805
> URL: https://issues.apache.org/jira/browse/KAFKA-2805
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Mayuresh Gharat
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> When no brokers are left in the cluster, the producer seems not to enforce 
> the request timeout as expected.
> From the user mailing list, the null check in batch expiration in 
> RecordAccumulator seems questionable: 
> https://github.com/apache/kafka/blob/ae5a5d7c08bb634576a414f6f2864c5b8a7e58a3/clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java#L220.
>  
> If this is correct behavior, it is probably worthwhile clarifying the purpose 
> of the check in a comment.





Build failed in Jenkins: kafka-trunk-jdk7 #803

2015-11-11 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-2790: doc improvements

--
[...truncated 1856 lines...]

kafka.admin.AddPartitionsTest > testTopicDoesNotExist PASSED

kafka.admin.AddPartitionsTest > testIncrementPartitions PASSED

kafka.admin.AddPartitionsTest > testManualAssignmentOfReplicas PASSED

kafka.admin.AddPartitionsTest > testReplicaPlacement PASSED

kafka.admin.ConfigCommandTest > testArgumentParse PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupWideDeleteInZKDoesNothingForActiveConsumerGroup PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupTopicWideDeleteInZKDoesNothingForActiveGroupConsumingMultipleTopics 
PASSED

kafka.admin.DeleteConsumerGroupTest > 
testConsumptionOnRecreatedTopicAfterTopicWideDeleteInZK PASSED

kafka.admin.DeleteConsumerGroupTest > testTopicWideDeleteInZK PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupTopicWideDeleteInZKForGroupConsumingOneTopic PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupTopicWideDeleteInZKForGroupConsumingMultipleTopics PASSED

kafka.admin.DeleteConsumerGroupTest > testGroupWideDeleteInZK PASSED

kafka.producer.SyncProducerTest > testReachableServer PASSED

kafka.producer.SyncProducerTest > testMessageSizeTooLarge PASSED

kafka.producer.SyncProducerTest > testNotEnoughReplicas PASSED

kafka.producer.SyncProducerTest > testMessageSizeTooLargeWithAckZero PASSED

kafka.producer.SyncProducerTest > testProducerCanTimeout PASSED

kafka.producer.SyncProducerTest > testProduceRequestWithNoResponse PASSED

kafka.producer.SyncProducerTest > testEmptyProduceRequest PASSED

kafka.producer.SyncProducerTest > testProduceCorrectlyReceivesResponse PASSED

kafka.producer.ProducerTest > testSendToNewTopic PASSED

kafka.producer.ProducerTest > testAsyncSendCanCorrectlyFailWithTimeout PASSED

kafka.producer.ProducerTest > testSendNullMessage PASSED

kafka.producer.ProducerTest > testUpdateBrokerPartitionInfo PASSED

kafka.producer.ProducerTest > testSendWithDeadBroker PASSED

kafka.producer.AsyncProducerTest > testFailedSendRetryLogic PASSED

kafka.producer.AsyncProducerTest > testQueueTimeExpired PASSED

kafka.producer.AsyncProducerTest > testPartitionAndCollateEvents PASSED

kafka.producer.AsyncProducerTest > testBatchSize PASSED

kafka.producer.AsyncProducerTest > testSerializeEvents PASSED

kafka.producer.AsyncProducerTest > testProducerQueueSize PASSED

kafka.producer.AsyncProducerTest > testRandomPartitioner PASSED

kafka.producer.AsyncProducerTest > testInvalidConfiguration PASSED

kafka.producer.AsyncProducerTest > testInvalidPartition PASSED

kafka.producer.AsyncProducerTest > testNoBroker PASSED

kafka.producer.AsyncProducerTest > testProduceAfterClosed PASSED

kafka.producer.AsyncProducerTest > testJavaProducer PASSED

kafka.producer.AsyncProducerTest > testIncompatibleEncoder PASSED

kafka.common.ZkNodeChangeNotificationListenerTest > testProcessNotification 
PASSED

kafka.common.TopicTest > testInvalidTopicNames PASSED

kafka.common.TopicTest > testTopicHasCollision PASSED

kafka.common.TopicTest > testTopicHasCollisionChars PASSED

kafka.common.ConfigTest > testInvalidGroupIds PASSED

kafka.common.ConfigTest > testInvalidClientIds PASSED

kafka.server.DelayedOperationTest > testRequestPurge PASSED

kafka.server.DelayedOperationTest > testRequestExpiry PASSED

kafka.server.DelayedOperationTest > testRequestSatisfaction PASSED

kafka.server.PlaintextReplicaFetchTest > testReplicaFetcherThread PASSED

kafka.server.ServerGenerateBrokerIdTest > testAutoGenerateBrokerId PASSED

kafka.server.ServerGenerateBrokerIdTest > testMultipleLogDirsMetaProps PASSED

kafka.server.ServerGenerateBrokerIdTest > testUserConfigAndGeneratedBrokerId 
PASSED

kafka.server.ServerGenerateBrokerIdTest > 
testConsistentBrokerIdFromUserConfigAndMetaProps PASSED

kafka.server.SslReplicaFetchTest > testReplicaFetcherThread PASSED

kafka.server.HighwatermarkPersistenceTest > 
testHighWatermarkPersistenceMultiplePartitions PASSED

kafka.server.HighwatermarkPersistenceTest > 
testHighWatermarkPersistenceSinglePartition PASSED

kafka.server.ThrottledResponseExpirationTest > testThrottledRequest PASSED

kafka.server.ThrottledResponseExpirationTest > testExpire PASSED

kafka.server.ReplicaManagerTest > testHighWaterMarkDirectoryMapping PASSED

kafka.server.ReplicaManagerTest > testIllegalRequiredAcks PASSED

kafka.server.ReplicaManagerTest > testHighwaterMarkRelativeDirectoryMapping 
PASSED

kafka.server.ServerShutdownTest > testCleanShutdownAfterFailedStartup PASSED

kafka.server.ServerShutdownTest > testConsecutiveShutdown PASSED

kafka.server.ServerShutdownTest > testCleanShutdown PASSED

kafka.server.ServerShutdownTest > testCleanShutdownWithDeleteTopicEnabled PASSED

kafka.server.LeaderElectionTest > testLeaderElectionWithStaleControllerEpoch 
PASSED

kafka.server.LeaderElectionTest > testLeaderElectionAndEpoch PASSED

kafka.server.OffsetCommitTest > testUpdateOffsets PASSED


Jenkins build is back to normal : kafka_0.9.0_jdk7 #8

2015-11-11 Thread Apache Jenkins Server
See 



[GitHub] kafka pull request: KAFKA-2807: Move ThroughputThrottler back to t...

2015-11-11 Thread ewencp
GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/499

KAFKA-2807: Move ThroughputThrottler back to tools jar to fix upgrade tests.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka 
kafka-2807-relocate-throughput-throttler

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/499.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #499


commit 410e876f2466c0fabd2c5e3a1ba9252375a35939
Author: Ewen Cheslack-Postava 
Date:   2015-11-11T20:14:49Z

KAFKA-2807: Move ThroughputThrottler back to tools jar to fix upgrade tests.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2807) Movement of throughput throttler to common broke upgrade tests

2015-11-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15001091#comment-15001091
 ] 

ASF GitHub Bot commented on KAFKA-2807:
---

GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/499

KAFKA-2807: Move ThroughputThrottler back to tools jar to fix upgrade tests.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka 
kafka-2807-relocate-throughput-throttler

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/499.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #499


commit 410e876f2466c0fabd2c5e3a1ba9252375a35939
Author: Ewen Cheslack-Postava 
Date:   2015-11-11T20:14:49Z

KAFKA-2807: Move ThroughputThrottler back to tools jar to fix upgrade tests.




> Movement of throughput throttler to common broke upgrade tests
> --
>
> Key: KAFKA-2807
> URL: https://issues.apache.org/jira/browse/KAFKA-2807
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> To run compatibility tests with a 0.8.2 producer using VerifiableProducer, we 
> use the 0.8.2 kafka-run-tools.sh classpath augmented with the 0.9.0 tools and 
> tools-dependencies classpaths.
> Recently, a refactoring moved ThroughputThrottler to the 
> org.apache.kafka.common.utils package, which breaks the existing 
> compatibility tests:
> {code}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/kafka/common/utils/ThroughputThrottler
> at 
> org.apache.kafka.tools.VerifiableProducer.main(VerifiableProducer.java:334)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.kafka.common.utils.ThroughputThrottler
> at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
> ... 1 more
> {code}
> Given the need to be able to run VerifiableProducer against 0.8.X, I'm not 
> sure VerifiableProducer can depend on org.apache.kafka.common.utils at this 
> point in time. 
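The NoClassDefFoundError above is the generic symptom of running a class compiled against one jar layout on a classpath that no longer contains the moved class. A minimal, self-contained illustration (the class name `com.example.MovedHelper` is made up, standing in for ThroughputThrottler missing from the 0.8.2 jars):

```java
// Minimal illustration of a class missing from the runtime classpath.
// "com.example.MovedHelper" is a hypothetical name; in the failure above the
// missing class is org.apache.kafka.common.utils.ThroughputThrottler.
public class ClasspathDemo {
    public static boolean isOnClasspath(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            // The same root cause appears chained under NoClassDefFoundError
            // when the JVM fails to link an already-compiled reference.
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isOnClasspath("java.util.concurrent.ConcurrentHashMap")); // true
        System.out.println(isOnClasspath("com.example.MovedHelper"));                // false
    }
}
```

The difference in the upgrade tests is that the reference is resolved implicitly at link time rather than via Class.forName, which is why the error surfaces as NoClassDefFoundError wrapping the ClassNotFoundException.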





[jira] [Updated] (KAFKA-2807) Movement of throughput throttler to common broke upgrade tests

2015-11-11 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-2807:
-
 Assignee: Ewen Cheslack-Postava  (was: Geoff Anderson)
 Reviewer: Gwen Shapira
Fix Version/s: (was: 0.9.0.0)
   0.9.0.1
   Status: Patch Available  (was: Open)

I tested the upgrade test and a copycat test locally; we can rerun the full suite 
if necessary. The patch required some renaming in the build file because 
Gradle does not allow ':' in project names.

> Movement of throughput throttler to common broke upgrade tests
> --
>
> Key: KAFKA-2807
> URL: https://issues.apache.org/jira/browse/KAFKA-2807
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Ewen Cheslack-Postava
>Priority: Blocker
> Fix For: 0.9.0.1
>
>
> To run compatibility tests with a 0.8.2 producer using VerifiableProducer, we 
> use the 0.8.2 kafka-run-tools.sh classpath augmented with the 0.9.0 tools and 
> tools-dependencies classpaths.
> Recently, a refactoring moved ThroughputThrottler to the 
> org.apache.kafka.common.utils package, which breaks the existing 
> compatibility tests:
> {code}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/kafka/common/utils/ThroughputThrottler
> at 
> org.apache.kafka.tools.VerifiableProducer.main(VerifiableProducer.java:334)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.kafka.common.utils.ThroughputThrottler
> at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
> ... 1 more
> {code}
> Given the need to be able to run VerifiableProducer against 0.8.X, I'm not 
> sure VerifiableProducer can depend on org.apache.kafka.common.utils at this 
> point in time. 





[jira] [Assigned] (KAFKA-2763) Reduce stream task migrations and initialization costs

2015-11-11 Thread Yasuhiro Matsuda (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yasuhiro Matsuda reassigned KAFKA-2763:
---

Assignee: Yasuhiro Matsuda

> Reduce stream task migrations and initialization costs
> --
>
> Key: KAFKA-2763
> URL: https://issues.apache.org/jira/browse/KAFKA-2763
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Affects Versions: 0.9.0.0
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
>
> Stream task assignment is not aware of either the previous task assignment or 
> local states of participating clients. By making the assignment logic aware 
> of them, we can reduce task migrations and initialization cost.
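One way the described awareness could work, sketched with hypothetical types (this is not the actual Kafka Streams assignor): give each task back to the client that previously owned it whenever that client is still alive, and fall back to least-loaded placement otherwise.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical "sticky" assignment sketch: tasks return to their previous
// owner when it is still alive, minimizing local-state migration.
public class StickyAssignSketch {
    public static Map<String, List<Integer>> assign(List<Integer> tasks,
                                                    List<String> clients,
                                                    Map<Integer, String> previousOwner) {
        Map<String, List<Integer>> assignment = new HashMap<>();
        for (String c : clients) assignment.put(c, new ArrayList<>());
        for (int task : tasks) {
            String prev = previousOwner.get(task);
            String target = (prev != null && assignment.containsKey(prev))
                    ? prev  // sticky: reuse existing local state
                    : Collections.min(clients,
                          Comparator.comparingInt((String c) -> assignment.get(c).size()));
            assignment.get(target).add(task);
        }
        return assignment;
    }

    public static void main(String[] args) {
        Map<Integer, String> prev = Map.of(0, "a", 1, "b");
        System.out.println(assign(List.of(0, 1, 2), List.of("a", "b"), prev));
    }
}
```

A real assignor also has to balance load and handle clients joining or leaving, but even this naive preference avoids needless state restoration for tasks whose owner survived the rebalance.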





[jira] [Created] (KAFKA-2811) Add standby tasks

2015-11-11 Thread Yasuhiro Matsuda (JIRA)
Yasuhiro Matsuda created KAFKA-2811:
---

 Summary: Add standby tasks
 Key: KAFKA-2811
 URL: https://issues.apache.org/jira/browse/KAFKA-2811
 Project: Kafka
  Issue Type: Sub-task
  Components: kafka streams
Reporter: Yasuhiro Matsuda


Restoring local state from state change-log topics can be expensive. To 
alleviate this, we want an option to maintain up-to-date replicas of local 
state. The task assignment logic should be aware of the existence of such 
replicas.





[jira] [Assigned] (KAFKA-2811) Add standby tasks

2015-11-11 Thread Yasuhiro Matsuda (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yasuhiro Matsuda reassigned KAFKA-2811:
---

Assignee: Yasuhiro Matsuda

> Add standby tasks
> -
>
> Key: KAFKA-2811
> URL: https://issues.apache.org/jira/browse/KAFKA-2811
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
>
> Restoring local state from state change-log topics can be expensive. To 
> alleviate this, we want an option to maintain up-to-date replicas of local 
> state. The task assignment logic should be aware of the existence of such 
> replicas.





[jira] [Created] (KAFKA-2812) Enhance new consumer integration test coverage

2015-11-11 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-2812:
--

 Summary: Enhance new consumer integration test coverage
 Key: KAFKA-2812
 URL: https://issues.apache.org/jira/browse/KAFKA-2812
 Project: Kafka
  Issue Type: Test
Reporter: Jason Gustafson
Assignee: Jason Gustafson


There are still some test cases that we didn't get to in KAFKA-2274 (including 
hard broker and client failures) as well as additional validation that can be 
added to existing test cases.





[GitHub] kafka pull request: KAFKA-2812: improve consumer integration tests

2015-11-11 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/500

KAFKA-2812: improve consumer integration tests



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-2812

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/500.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #500


commit ff9a2dfba81e15170204a21dec12307206e9bd61
Author: Jason Gustafson 
Date:   2015-11-10T23:24:04Z

KAFKA-2812: improve consumer integration tests






[jira] [Commented] (KAFKA-2812) Enhance new consumer integration test coverage

2015-11-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15001337#comment-15001337
 ] 

ASF GitHub Bot commented on KAFKA-2812:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/500

KAFKA-2812: improve consumer integration tests



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-2812

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/500.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #500


commit ff9a2dfba81e15170204a21dec12307206e9bd61
Author: Jason Gustafson 
Date:   2015-11-10T23:24:04Z

KAFKA-2812: improve consumer integration tests




> Enhance new consumer integration test coverage
> --
>
> Key: KAFKA-2812
> URL: https://issues.apache.org/jira/browse/KAFKA-2812
> Project: Kafka
>  Issue Type: Test
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> There are still some test cases that we didn't get to in KAFKA-2274 
> (including hard broker and client failures) as well as additional validation 
> that can be added to existing test cases.





[jira] [Created] (KAFKA-2813) selector doesn't close socket connection on non-IOExceptions

2015-11-11 Thread Jun Rao (JIRA)
Jun Rao created KAFKA-2813:
--

 Summary: selector doesn't close socket connection on 
non-IOExceptions
 Key: KAFKA-2813
 URL: https://issues.apache.org/jira/browse/KAFKA-2813
 Project: Kafka
  Issue Type: Bug
  Components: core
Reporter: Jun Rao
Assignee: Jun Rao
Priority: Blocker
 Fix For: 0.9.0.0


When running a system test, we saw lots of entries like the following. The 
issue is that when the current leader switches to being a follower, the log on 
that follower is truncated. A concurrent fetch request may be in flight at that 
moment; if so, a KafkaException is thrown while writing the fetch response (in 
FileMessageSet). The exception propagates up through Selector.poll(). Selector 
catches IOException and closes the corresponding socket, but KafkaException is 
not an IOException. Since the socket is never closed, Selector.poll() keeps 
accessing it and keeps hitting the same error.

[2015-11-11 07:25:01,150] ERROR Processor got uncaught exception. 
(kafka.network.Processor)
kafka.common.KafkaException: Size of FileMessageSet 
/mnt/kafka-data-logs/test_topic-0/.log has been truncated 
during write: old size 16368, new size 0
at kafka.log.FileMessageSet.writeTo(FileMessageSet.scala:158)
at kafka.api.PartitionDataSend.writeTo(FetchResponse.scala:77)
at org.apache.kafka.common.network.MultiSend.writeTo(MultiSend.java:81)
at kafka.api.TopicDataSend.writeTo(FetchResponse.scala:148)
at org.apache.kafka.common.network.MultiSend.writeTo(MultiSend.java:81)
at kafka.api.FetchResponseSend.writeTo(FetchResponse.scala:291)
at 
org.apache.kafka.common.network.KafkaChannel.send(KafkaChannel.java:165)
at 
org.apache.kafka.common.network.KafkaChannel.write(KafkaChannel.java:152)
at org.apache.kafka.common.network.Selector.poll(Selector.java:301)
at kafka.network.Processor.run(SocketServer.scala:413)
at java.lang.Thread.run(Thread.java:745)
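The control-flow problem can be reproduced in miniature (hypothetical class and method names, not Kafka's actual Selector internals): a poll loop that closes channels only on IOException never releases a channel whose send path throws a RuntimeException such as KafkaException, so every subsequent poll revisits the same broken socket.

```java
import java.io.IOException;

// Illustrative sketch, not Kafka's Selector: a channel whose write fails with
// a RuntimeException (standing in for KafkaException), not an IOException.
public class SelectorSketch {
    static final class FakeChannel {
        boolean closed = false;
        void write() throws IOException {
            throw new RuntimeException("FileMessageSet truncated during write");
        }
    }

    // Buggy variant: only IOException closes the channel, so the same broken
    // channel is revisited on every iteration, logging the same error.
    public static int pollBuggy(int iterations) {
        FakeChannel ch = new FakeChannel();
        int errors = 0;
        for (int i = 0; i < iterations && !ch.closed; i++) {
            try {
                ch.write();
            } catch (IOException e) {
                ch.closed = true;   // never reached for a RuntimeException
            } catch (RuntimeException e) {
                errors++;           // "uncaught exception"; socket stays open
            }
        }
        return errors;
    }

    // Fixed variant: any exception from the write path releases the socket.
    public static boolean pollFixed() {
        FakeChannel ch = new FakeChannel();
        try {
            ch.write();
        } catch (Exception e) {
            ch.closed = true;
        }
        return ch.closed;
    }

    public static void main(String[] args) {
        System.out.println("buggy errors: " + pollBuggy(3));  // prints 3
        System.out.println("fixed closed: " + pollFixed());   // prints true
    }
}
```

Widening the catch (or closing the channel in a finally-style cleanup path) turns a repeating error into a single connection teardown.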






[jira] [Updated] (KAFKA-2807) Movement of throughput throttler to common broke upgrade tests

2015-11-11 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2807:

Fix Version/s: (was: 0.9.0.1)
   0.9.1.0

> Movement of throughput throttler to common broke upgrade tests
> --
>
> Key: KAFKA-2807
> URL: https://issues.apache.org/jira/browse/KAFKA-2807
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Ewen Cheslack-Postava
>Priority: Blocker
> Fix For: 0.9.1.0
>
>
> To run compatibility tests with a 0.8.2 producer using VerifiableProducer, we 
> use the 0.8.2 kafka-run-tools.sh classpath augmented with the 0.9.0 tools and 
> tools-dependencies classpaths.
> Recently, a refactoring moved ThroughputThrottler to the 
> org.apache.kafka.common.utils package, which breaks the existing 
> compatibility tests:
> {code}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/kafka/common/utils/ThroughputThrottler
> at 
> org.apache.kafka.tools.VerifiableProducer.main(VerifiableProducer.java:334)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.kafka.common.utils.ThroughputThrottler
> at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
> ... 1 more
> {code}
> Given the need to be able to run VerifiableProducer against 0.8.X, I'm not 
> sure VerifiableProducer can depend on org.apache.kafka.common.utils at this 
> point in time. 





[jira] [Updated] (KAFKA-2807) Movement of throughput throttler to common broke upgrade tests

2015-11-11 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2807:

Fix Version/s: (was: 0.9.0.0)
   0.9.1.0

> Movement of throughput throttler to common broke upgrade tests
> --
>
> Key: KAFKA-2807
> URL: https://issues.apache.org/jira/browse/KAFKA-2807
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Ewen Cheslack-Postava
>Priority: Blocker
> Fix For: 0.9.1.0
>
>
> To run compatibility tests with a 0.8.2 producer using VerifiableProducer, we 
> use the 0.8.2 kafka-run-tools.sh classpath augmented with the 0.9.0 tools and 
> tools-dependencies classpaths.
> Recently, a refactoring moved ThroughputThrottler to the 
> org.apache.kafka.common.utils package, which breaks the existing 
> compatibility tests:
> {code}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/kafka/common/utils/ThroughputThrottler
> at 
> org.apache.kafka.tools.VerifiableProducer.main(VerifiableProducer.java:334)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.kafka.common.utils.ThroughputThrottler
> at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
> ... 1 more
> {code}
> Given the need to be able to run VerifiableProducer against 0.8.X, I'm not 
> sure VerifiableProducer can depend on org.apache.kafka.common.utils at this 
> point in time. 





[jira] [Updated] (KAFKA-2807) Movement of throughput throttler to common broke upgrade tests

2015-11-11 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2807:

   Resolution: Fixed
Fix Version/s: (was: 0.9.1.0)
   0.9.0.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 499
[https://github.com/apache/kafka/pull/499]

> Movement of throughput throttler to common broke upgrade tests
> --
>
> Key: KAFKA-2807
> URL: https://issues.apache.org/jira/browse/KAFKA-2807
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Ewen Cheslack-Postava
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> To run compatibility tests with a 0.8.2 producer using VerifiableProducer, we 
> use the 0.8.2 kafka-run-tools.sh classpath augmented with the 0.9.0 tools and 
> tools-dependencies classpaths.
> Recently, a refactoring moved ThroughputThrottler to the 
> org.apache.kafka.common.utils package, which breaks the existing 
> compatibility tests:
> {code}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/kafka/common/utils/ThroughputThrottler
> at 
> org.apache.kafka.tools.VerifiableProducer.main(VerifiableProducer.java:334)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.kafka.common.utils.ThroughputThrottler
> at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
> ... 1 more
> {code}
> Given the need to be able to run VerifiableProducer against 0.8.X, I'm not 
> sure VerifiableProducer can depend on org.apache.kafka.common.utils at this 
> point in time. 





[GitHub] kafka pull request: KAFKA-2807: Move ThroughputThrottler back to t...

2015-11-11 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/499




[jira] [Commented] (KAFKA-2807) Movement of throughput throttler to common broke upgrade tests

2015-11-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15001357#comment-15001357
 ] 

ASF GitHub Bot commented on KAFKA-2807:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/499


> Movement of throughput throttler to common broke upgrade tests
> --
>
> Key: KAFKA-2807
> URL: https://issues.apache.org/jira/browse/KAFKA-2807
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Ewen Cheslack-Postava
>Priority: Blocker
> Fix For: 0.9.1.0
>
>
> To run compatibility tests with a 0.8.2 producer using VerifiableProducer, we 
> use the 0.8.2 kafka-run-tools.sh classpath augmented with the 0.9.0 tools and 
> tools-dependencies classpaths.
> Recently, a refactoring moved ThroughputThrottler to the 
> org.apache.kafka.common.utils package, which breaks the existing 
> compatibility tests:
> {code}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/kafka/common/utils/ThroughputThrottler
> at 
> org.apache.kafka.tools.VerifiableProducer.main(VerifiableProducer.java:334)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.kafka.common.utils.ThroughputThrottler
> at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
> ... 1 more
> {code}
> Given the need to be able to run VerifiableProducer against 0.8.X, I'm not 
> sure VerifiableProducer can depend on org.apache.kafka.common.utils at this 
> point in time. 





[GitHub] kafka pull request: KAFKA-2763: better stream task assignment

2015-11-11 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/497




[jira] [Resolved] (KAFKA-2763) Reduce stream task migrations and initialization costs

2015-11-11 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-2763.
--
   Resolution: Fixed
Fix Version/s: 0.9.1.0

Issue resolved by pull request 497
[https://github.com/apache/kafka/pull/497]

> Reduce stream task migrations and initialization costs
> --
>
> Key: KAFKA-2763
> URL: https://issues.apache.org/jira/browse/KAFKA-2763
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Affects Versions: 0.9.0.0
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
> Fix For: 0.9.1.0
>
>
> Stream task assignment is not aware of either the previous task assignment or 
> local states of participating clients. By making the assignment logic aware 
> of them, we can reduce task migrations and initialization cost.





[jira] [Commented] (KAFKA-2763) Reduce stream task migrations and initialization costs

2015-11-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15001375#comment-15001375
 ] 

ASF GitHub Bot commented on KAFKA-2763:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/497


> Reduce stream task migrations and initialization costs
> --
>
> Key: KAFKA-2763
> URL: https://issues.apache.org/jira/browse/KAFKA-2763
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Affects Versions: 0.9.0.0
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
> Fix For: 0.9.1.0
>
>
> Stream task assignment is not aware of either the previous task assignment or 
> local states of participating clients. By making the assignment logic aware 
> of them, we can reduce task migrations and initialization cost.





Build failed in Jenkins: kafka-trunk-jdk8 #134

2015-11-11 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-2807: Move ThroughputThrottler back to tools jar to fix upgrade

[wangguoz] KAFKA-2763: better stream task assignment

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (Ubuntu ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 124f73b1747a574982e9ca491712e6758ddbacea 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 124f73b1747a574982e9ca491712e6758ddbacea
 > git rev-list a8ccdc6154a1e10982cb80df82e8661903eb9ae5 # timeout=10
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson5919858508383160774.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.1/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 25.642 secs
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson8666573125960424975.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect-api:clean UP-TO-DATE
:connect-file:clean UP-TO-DATE
:connect-json:clean UP-TO-DATE
:connect-runtime:clean UP-TO-DATE
:connect-tools:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:jar_core_2_10_5
Building project 'core' with Scala version 2.10.5
:kafka-trunk-jdk8:clients:compileJava
:jar_core_2_10_5 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 15.499 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
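The "Could not add entry ... to cache fileHashes.bin" failure above typically points to a corrupted Gradle file-hash cache on the build agent; a common remedy (hedged: the exact cache path varies by Gradle version) is to delete the cache directory and let Gradle rebuild it. A minimal sketch of the recursive delete, exercised against a throwaway directory rather than the real ~/.gradle/caches:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

// Sketch: clear a (possibly corrupted) Gradle cache directory so the next
// build recreates fileHashes.bin from scratch. Uses a temp dir for the demo.
public class ClearGradleCache {
    static void deleteRecursively(Path dir) throws IOException {
        if (!Files.exists(dir)) return;
        try (Stream<Path> paths = Files.walk(dir)) {
            // Reverse order so children are deleted before their parent dirs.
            paths.sorted(Comparator.reverseOrder()).forEach(p -> {
                try { Files.delete(p); } catch (IOException e) { throw new RuntimeException(e); }
            });
        }
    }

    public static void main(String[] args) throws IOException {
        Path fakeCaches = Files.createTempDirectory("gradle-caches");
        Files.createFile(fakeCaches.resolve("fileHashes.bin"));
        deleteRecursively(fakeCaches);
        System.out.println(Files.exists(fakeCaches)); // false
    }
}
```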


[jira] [Created] (KAFKA-2814) Kafka Connect system tests using REST interface fail on AWS

2015-11-11 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-2814:


 Summary: Kafka Connect system tests using REST interface fail on 
AWS
 Key: KAFKA-2814
 URL: https://issues.apache.org/jira/browse/KAFKA-2814
 Project: Kafka
  Issue Type: Bug
  Components: copycat, system tests
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava
 Fix For: 0.9.1.0


Currently the tests use the node's hostname, which seems sensible but isn't 
guaranteed to resolve on the driver machine: the hostname is a short name 
assigned by Vagrant and set up via vagrant-hostmanager on each instance, but 
not necessarily set up on the driver machine. Instead we need to use a 
special field containing an address that is routable from the driver, even 
though it may be worse for logging (e.g., an IP address or a long EC2 hostname 
containing an encoded version of the IP address).
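The fix described above can be sketched as preferring an explicitly routable address over the cluster-internal hostname when the driver builds REST URLs. This is a toy illustration, not ducktape's actual API; the field names (`hostname`, `externallyRoutableIp`) are hypothetical:

```java
// Hypothetical sketch of the address-selection logic for the test driver.
public class NodeAddress {
    final String hostname;             // short Vagrant-assigned name; may not resolve on the driver
    final String externallyRoutableIp; // address guaranteed reachable from the driver

    NodeAddress(String hostname, String externallyRoutableIp) {
        this.hostname = hostname;
        this.externallyRoutableIp = externallyRoutableIp;
    }

    /** Address the driver should use; falls back to the hostname if no routable IP is known. */
    String driverFacingAddress() {
        return externallyRoutableIp != null ? externallyRoutableIp : hostname;
    }

    public static void main(String[] args) {
        NodeAddress node = new NodeAddress("worker1", "54.12.34.56");
        // 8083 is the default Kafka Connect REST port.
        System.out.println("http://" + node.driverFacingAddress() + ":8083/connectors");
    }
}
```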





[GitHub] kafka pull request: KAFKA-2813: selector doesn't close socket conn...

2015-11-11 Thread junrao
GitHub user junrao opened a pull request:

https://github.com/apache/kafka/pull/501

KAFKA-2813: selector doesn't close socket connection on non-IOExceptions

Patched Selector.poll() to close the connection on any exception.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/junrao/kafka KAFKA-2813

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/501.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #501


commit 2a4dfd4d63f3b3383d9ce01fce7c2be151ef9f78
Author: Jun Rao 
Date:   2015-11-12T01:11:49Z

KAFKA-2813: selector doesn't close socket connection on non-IOExceptions




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2813) selector doesn't close socket connection on non-IOExceptions

2015-11-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15001441#comment-15001441
 ] 

ASF GitHub Bot commented on KAFKA-2813:
---

GitHub user junrao opened a pull request:

https://github.com/apache/kafka/pull/501

KAFKA-2813: selector doesn't close socket connection on non-IOExceptions

Patched Selector.poll() to close the connection on any exception.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/junrao/kafka KAFKA-2813

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/501.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #501


commit 2a4dfd4d63f3b3383d9ce01fce7c2be151ef9f78
Author: Jun Rao 
Date:   2015-11-12T01:11:49Z

KAFKA-2813: selector doesn't close socket connection on non-IOExceptions




> selector doesn't close socket connection on non-IOExceptions
> 
>
> Key: KAFKA-2813
> URL: https://issues.apache.org/jira/browse/KAFKA-2813
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Jun Rao
>Assignee: Jun Rao
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> When running a system test, we saw lots of entries like the following. The 
> issue is that when the current leader switches to the follower, we will 
> truncate the log in the follower. It's possible there is a concurrent fetch 
> request being served at this moment. If this happens, we throw a 
> KafkaException when trying to send the fetch response (in FileMessageSet). 
> The exception will propagate through Selector.poll(). Selector catches 
> IOException and closes the corresponding socket. However, KafkaException is 
> not an IOException. Since the socket is not closed, Selector.poll() will keep 
> accessing the socket and keep getting the same error.
> [2015-11-11 07:25:01,150] ERROR Processor got uncaught exception. 
> (kafka.network.Processor)
> kafka.common.KafkaException: Size of FileMessageSet 
> /mnt/kafka-data-logs/test_topic-0/.log has been truncated 
> during write: old size 16368, new size 0
> at kafka.log.FileMessageSet.writeTo(FileMessageSet.scala:158)
> at kafka.api.PartitionDataSend.writeTo(FetchResponse.scala:77)
> at 
> org.apache.kafka.common.network.MultiSend.writeTo(MultiSend.java:81)
> at kafka.api.TopicDataSend.writeTo(FetchResponse.scala:148)
> at 
> org.apache.kafka.common.network.MultiSend.writeTo(MultiSend.java:81)
> at kafka.api.FetchResponseSend.writeTo(FetchResponse.scala:291)
> at 
> org.apache.kafka.common.network.KafkaChannel.send(KafkaChannel.java:165)
> at 
> org.apache.kafka.common.network.KafkaChannel.write(KafkaChannel.java:152)
> at org.apache.kafka.common.network.Selector.poll(Selector.java:301)
> at kafka.network.Processor.run(SocketServer.scala:413)
> at java.lang.Thread.run(Thread.java:745)



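The failure mode described in KAFKA-2813 can be reproduced in miniature: when the poll loop catches only IOException, a RuntimeException (such as KafkaException) escaping the write path leaves the channel open, so the loop keeps hitting the same error. This is a toy sketch, not Kafka's actual Selector code:

```java
import java.io.IOException;

// Toy model: a channel whose write path throws a RuntimeException, mimicking
// the KafkaException thrown when a FileMessageSet is truncated during write.
public class SelectorSketch {
    static class FakeChannel {
        boolean closed = false;
        // Declared to throw IOException (like real socket writes) but actually
        // throws an unchecked exception, as in the reported bug.
        void write() throws IOException {
            throw new RuntimeException("FileMessageSet truncated during write");
        }
        void close() { closed = true; }
    }

    // Pre-patch behavior: only IOException triggers close(), so the channel leaks.
    static boolean pollCatchingOnlyIOException(FakeChannel ch) {
        try {
            ch.write();
        } catch (IOException e) {       // never matches the RuntimeException
            ch.close();
        } catch (RuntimeException e) {  // propagates in the real code; swallowed here for the demo
        }
        return ch.closed;
    }

    // Patched behavior (per the PR): any exception closes the connection.
    static boolean pollCatchingAll(FakeChannel ch) {
        try {
            ch.write();
        } catch (Exception e) {
            ch.close();
        }
        return ch.closed;
    }

    public static void main(String[] args) {
        System.out.println(pollCatchingOnlyIOException(new FakeChannel())); // false: channel leaked
        System.out.println(pollCatchingAll(new FakeChannel()));             // true: channel closed
    }
}
```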


[GitHub] kafka pull request: TRIVIAL: provide clearer error in describe gro...

2015-11-11 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/502

TRIVIAL: provide clearer error in describe group when group is inactive



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka trivial-consumer-groups-fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/502.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #502


commit 8c53a6747addf95f81b6ec10e7d87712088e2fd6
Author: Jason Gustafson 
Date:   2015-11-12T01:18:03Z

TRIVIAL: provide clearer error in describe group when group is inactive






Build failed in Jenkins: kafka-trunk-jdk7 #804

2015-11-11 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-2807: Move ThroughputThrottler back to tools jar to fix upgrade

[wangguoz] KAFKA-2763: better stream task assignment

--
[...truncated 3877 lines...]
org.apache.kafka.clients.producer.internals.RecordAccumulatorTest > testFull 
PASSED

org.apache.kafka.clients.producer.internals.RecordAccumulatorTest > 
testAbortIncompleteBatches PASSED

org.apache.kafka.clients.producer.internals.RecordAccumulatorTest > 
testExpiredBatches PASSED

org.apache.kafka.clients.producer.internals.RecordAccumulatorTest > testLinger 
PASSED

org.apache.kafka.clients.producer.internals.RecordAccumulatorTest > 
testPartialDrain PASSED

org.apache.kafka.clients.producer.internals.RecordAccumulatorTest > 
testAppendLarge PASSED

org.apache.kafka.clients.producer.internals.SenderTest > testQuotaMetrics PASSED

org.apache.kafka.clients.producer.internals.SenderTest > testRetries PASSED

org.apache.kafka.clients.producer.internals.SenderTest > testSimple PASSED
:connect-api:checkstyleMain UP-TO-DATE
:connect-api:compileTestJava UP-TO-DATE
:connect-api:processTestResources UP-TO-DATE
:connect-api:testClasses UP-TO-DATE
:connect-api:checkstyleTest UP-TO-DATE
:connect-api:test UP-TO-DATE
:connect-file:checkstyleMain UP-TO-DATE
:connect-file:compileTestJava UP-TO-DATE
:connect-file:processTestResources UP-TO-DATE
:connect-file:testClasses UP-TO-DATE
:connect-file:checkstyleTest UP-TO-DATE
:connect-file:test UP-TO-DATE
:connect-json:checkstyleMain UP-TO-DATE
:connect-json:compileTestJava UP-TO-DATE
:connect-json:processTestResources UP-TO-DATE
:connect-json:testClasses UP-TO-DATE
:connect-json:checkstyleTest UP-TO-DATE
:connect-json:test UP-TO-DATE
:connect-runtime:checkstyleMain UP-TO-DATE
:connect-runtime:compileTestJava UP-TO-DATE
:connect-runtime:processTestResources UP-TO-DATE
:connect-runtime:testClasses UP-TO-DATE
:connect-runtime:checkstyleTest UP-TO-DATE
:connect-runtime:test UP-TO-DATE
:connect-tools:checkstyleMain UP-TO-DATE
:connect-tools:compileTestJava UP-TO-DATE
:connect-tools:processTestResources UP-TO-DATE
:connect-tools:testClasses UP-TO-DATE
:connect-tools:checkstyleTest UP-TO-DATE
:connect-tools:test UP-TO-DATE
:examples:compileTestJava UP-TO-DATE
:examples:processTestResources UP-TO-DATE
:examples:testClasses UP-TO-DATE
:examples:test UP-TO-DATE
:log4j-appender:checkstyleMain
:log4j-appender:compileTestJava
:log4j-appender:processTestResources UP-TO-DATE
:log4j-appender:testClasses
:log4j-appender:checkstyleTest
:log4j-appender:test

org.apache.kafka.log4jappender.KafkaLog4jAppenderTest > testLog4jAppends PASSED

org.apache.kafka.log4jappender.KafkaLog4jAppenderTest > testKafkaLog4jConfigs 
PASSED
:streams:checkstyleMain
:streams:compileTestJavaNote: Some input files use unchecked or unsafe 
operations.
Note: Recompile with -Xlint:unchecked for details.

:streams:processTestResources UP-TO-DATE
:streams:testClasses
:streams:checkstyleTest
:streams:test

org.apache.kafka.streams.processor.DefaultPartitionGrouperTest > testGrouping 
PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testSourceTopics PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithSameName PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithSelfParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddProcessorWithSelfParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithSink PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testTopicGroups PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testBuild PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithSource PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSourceWithSameName PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddProcessorWithSameName PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSourceWithSameTopic PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testTopicGroupsByStateStore PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithDuplicates PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithWrongParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddProcessorWithWrongParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testAddStateStore 
PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithNonExistingProcessor PASSED

org.apache.kafka.streams.processor.internals.PartitionGroupTest > 
testTimeTracking PASSED

org.apache.kafka.streams.processor.internals.MinTimestampTrackerTest > 
testTracking PASSED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testProcessOrder 
PASSED

org.apache.kafka.streams.processor.internals.StreamTaskTes

[GitHub] kafka pull request: KAFKA-2805

2015-11-11 Thread MayureshGharat
GitHub user MayureshGharat opened a pull request:

https://github.com/apache/kafka/pull/503

KAFKA-2805

Removed the check that expired only those batches whose metadata was 
unavailable. Batches are now expired irrespective of whether the leader is 
available, as soon as they reach the request timeout threshold.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/MayureshGharat/kafka kafka-2805

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/503.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #503


commit 573c68b9ee3107a50c96d383e22dd415fc39b96f
Author: Mayuresh Gharat 
Date:   2015-11-12T01:42:46Z

Removed the check that expired only those batches whose metadata was 
unavailable. Batches are now expired irrespective of whether the leader is 
available, as soon as they reach the request timeout threshold






[jira] [Commented] (KAFKA-2805) RecordAccumulator request timeout not enforced when all brokers are gone

2015-11-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15001495#comment-15001495
 ] 

ASF GitHub Bot commented on KAFKA-2805:
---

GitHub user MayureshGharat opened a pull request:

https://github.com/apache/kafka/pull/503

KAFKA-2805

Removed the check that expired only those batches whose metadata was 
unavailable. Batches are now expired irrespective of whether the leader is 
available, as soon as they reach the request timeout threshold.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/MayureshGharat/kafka kafka-2805

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/503.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #503


commit 573c68b9ee3107a50c96d383e22dd415fc39b96f
Author: Mayuresh Gharat 
Date:   2015-11-12T01:42:46Z

Removed the check that expired only those batches whose metadata was 
unavailable. Batches are now expired irrespective of whether the leader is 
available, as soon as they reach the request timeout threshold




> RecordAccumulator request timeout not enforced when all brokers are gone
> 
>
> Key: KAFKA-2805
> URL: https://issues.apache.org/jira/browse/KAFKA-2805
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Mayuresh Gharat
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> When no brokers are left in the cluster, the producer seems not to enforce 
> the request timeout as expected.
> From the user mailing list, the null check in batch expiration in 
> RecordAccumulator seems questionable: 
> https://github.com/apache/kafka/blob/ae5a5d7c08bb634576a414f6f2864c5b8a7e58a3/clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java#L220.
>  
> If this is correct behavior, it is probably worthwhile clarifying the purpose 
> of the check in a comment.





[jira] [Commented] (KAFKA-2805) RecordAccumulator request timeout not enforced when all brokers are gone

2015-11-11 Thread Mayuresh Gharat (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15001559#comment-15001559
 ] 

Mayuresh Gharat commented on KAFKA-2805:


Yes, that's right. The check was put in place because KIP-19 explicitly said 
batches should be timed out only if the metadata is unavailable. I debugged 
this today and arrived at the same reasoning; I was about to write the same 
explanation before I read this comment.

My bad, I should have considered that the accumulator will not drain batches 
for nodes that are NOT ready, so those batches will never be expired when the 
metadata is stale and still lists a leader. I have uploaded a patch removing 
the check.
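The before/after behavior discussed here can be condensed into the expiry predicate itself. This is a simplified sketch with illustrative names, not the real RecordAccumulator code:

```java
// Sketch of the batch-expiry decision in the producer's accumulator.
public class BatchExpirySketch {
    // Pre-patch (KIP-19 reading): expire a batch only while the partition
    // leader is unknown, i.e. metadata is unavailable.
    static boolean expiredOld(long ageMs, long requestTimeoutMs, boolean leaderKnown) {
        return !leaderKnown && ageMs > requestTimeoutMs;
    }

    // Patched behavior: age alone decides, regardless of leader availability.
    static boolean expiredNew(long ageMs, long requestTimeoutMs) {
        return ageMs > requestTimeoutMs;
    }

    public static void main(String[] args) {
        // All brokers are gone, but stale metadata still lists a leader:
        System.out.println(expiredOld(60_000, 30_000, true)); // false: batch never expires
        System.out.println(expiredNew(60_000, 30_000));       // true: timeout enforced
    }
}
```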

> RecordAccumulator request timeout not enforced when all brokers are gone
> 
>
> Key: KAFKA-2805
> URL: https://issues.apache.org/jira/browse/KAFKA-2805
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Mayuresh Gharat
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> When no brokers are left in the cluster, the producer seems not to enforce 
> the request timeout as expected.
> From the user mailing list, the null check in batch expiration in 
> RecordAccumulator seems questionable: 
> https://github.com/apache/kafka/blob/ae5a5d7c08bb634576a414f6f2864c5b8a7e58a3/clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java#L220.
>  
> If this is correct behavior, it is probably worthwhile clarifying the purpose 
> of the check in a comment.





[GitHub] kafka pull request: MINOR: Do not collect zk persistent data by de...

2015-11-11 Thread granders
GitHub user granders opened a pull request:

https://github.com/apache/kafka/pull/504

MINOR: Do not collect zk persistent data by default

In system tests zookeeper service, it is overkill and space-intensive to 
collect zookeeper data logs by default. This minor patch turns off default 
collection.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/confluentinc/kafka minor-zk-change-log-collect

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/504.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #504


commit 4f41760e0cb7f71ebeb383b3576e11fe901180a7
Author: Geoff Anderson 
Date:   2015-11-12T02:33:36Z

Do not collect zk persistent data by default






[GitHub] kafka pull request: KAFKA-2805

2015-11-11 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/503




[jira] [Commented] (KAFKA-2805) RecordAccumulator request timeout not enforced when all brokers are gone

2015-11-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15001598#comment-15001598
 ] 

ASF GitHub Bot commented on KAFKA-2805:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/503


> RecordAccumulator request timeout not enforced when all brokers are gone
> 
>
> Key: KAFKA-2805
> URL: https://issues.apache.org/jira/browse/KAFKA-2805
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Mayuresh Gharat
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> When no brokers are left in the cluster, the producer seems not to enforce 
> the request timeout as expected.
> From the user mailing list, the null check in batch expiration in 
> RecordAccumulator seems questionable: 
> https://github.com/apache/kafka/blob/ae5a5d7c08bb634576a414f6f2864c5b8a7e58a3/clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java#L220.
>  
> If this is correct behavior, it is probably worthwhile clarifying the purpose 
> of the check in a comment.





[jira] [Resolved] (KAFKA-2805) RecordAccumulator request timeout not enforced when all brokers are gone

2015-11-11 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-2805.

Resolution: Fixed

Issue resolved by pull request 503
[https://github.com/apache/kafka/pull/503]

> RecordAccumulator request timeout not enforced when all brokers are gone
> 
>
> Key: KAFKA-2805
> URL: https://issues.apache.org/jira/browse/KAFKA-2805
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Mayuresh Gharat
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> When no brokers are left in the cluster, the producer seems not to enforce 
> the request timeout as expected.
> From the user mailing list, the null check in batch expiration in 
> RecordAccumulator seems questionable: 
> https://github.com/apache/kafka/blob/ae5a5d7c08bb634576a414f6f2864c5b8a7e58a3/clients/src/main/java/org/apache/kafka/clients/producer/internals/RecordAccumulator.java#L220.
>  
> If this is correct behavior, it is probably worthwhile clarifying the purpose 
> of the check in a comment.





[GitHub] kafka pull request: MINOR: Do not collect zk persistent data by de...

2015-11-11 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/504




[jira] [Created] (KAFKA-2815) unit test failure in org.apache.kafka.streams.processor.internals.KafkaStreamingPartitionAssignorTest

2015-11-11 Thread Jun Rao (JIRA)
Jun Rao created KAFKA-2815:
--

 Summary: unit test failure in 
org.apache.kafka.streams.processor.internals.KafkaStreamingPartitionAssignorTest
 Key: KAFKA-2815
 URL: https://issues.apache.org/jira/browse/KAFKA-2815
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.9.1.0
Reporter: Jun Rao


See the following failure on trunk.

org.apache.kafka.streams.processor.internals.KafkaStreamingPartitionAssignorTest
 > testSubscription FAILED
java.lang.AssertionError: expected:<[topic1, topic2]> but was:<[topic2, 
topic1]>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:144)
at 
org.apache.kafka.streams.processor.internals.KafkaStreamingPartitionAssignorTest.testSubscription(KafkaStreamingPartitionAssignorTest.java:174)
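The assertion above fails on element order, not content: `[topic1, topic2]` vs `[topic2, topic1]`. When ordering is not part of the contract, comparing the collections as sets avoids this flakiness. A sketch using plain assertions rather than JUnit:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;

// Demonstrates why an order-sensitive equality check fails here, and the
// order-insensitive comparison that would make the test deterministic.
public class SubscriptionOrderSketch {
    public static void main(String[] args) {
        List<String> expected = Arrays.asList("topic1", "topic2");
        List<String> actual   = Arrays.asList("topic2", "topic1"); // order may vary by run

        System.out.println(expected.equals(actual));                               // false
        System.out.println(new HashSet<>(expected).equals(new HashSet<>(actual))); // true
    }
}
```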






Build failed in Jenkins: kafka-trunk-jdk8 #135

2015-11-11 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-2805; RecordAccumulator request timeout not enforced when all

[junrao] MINOR: Do not collect zk persistent data by default

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (Ubuntu ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision df88d3be75396b48a762149af2f4bbcd60fe69b9 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f df88d3be75396b48a762149af2f4bbcd60fe69b9
 > git rev-list 124f73b1747a574982e9ca491712e6758ddbacea # timeout=10
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson693328862629733254.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.1/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 9.282 secs
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson2422552957358994867.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect-api:clean UP-TO-DATE
:connect-file:clean UP-TO-DATE
:connect-json:clean UP-TO-DATE
:connect-runtime:clean UP-TO-DATE
:connect-tools:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:jar_core_2_10_5
Building project 'core' with Scala version 2.10.5
:kafka-trunk-jdk8:clients:compileJava
:jar_core_2_10_5 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 9.399 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45


Build failed in Jenkins: kafka-trunk-jdk7 #805

2015-11-11 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-2805; RecordAccumulator request timeout not enforced when all

[junrao] MINOR: Do not collect zk persistent data by default

--
[...truncated 1410 lines...]

kafka.log.LogTest > testParseTopicPartitionNameForMissingPartition PASSED

kafka.log.LogTest > testParseTopicPartitionNameForEmptyName PASSED

kafka.log.LogTest > testOpenDeletesObsoleteFiles PASSED

kafka.log.LogTest > testSizeBasedLogRoll PASSED

kafka.log.LogTest > testTimeBasedLogRollJitter PASSED

kafka.log.LogTest > testParseTopicPartitionName PASSED

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.OffsetIndexTest > lookupExtremeCases PASSED

kafka.log.OffsetIndexTest > appendTooMany PASSED

kafka.log.OffsetIndexTest > randomLookupTest PASSED

kafka.log.OffsetIndexTest > testReopen PASSED

kafka.log.OffsetIndexTest > appendOutOfOrder PASSED

kafka.log.OffsetIndexTest > truncate PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.CleanerTest > testBuildOffsetMap PASSED

kafka.log.CleanerTest > testSegmentGrouping PASSED

kafka.log.CleanerTest > testCleanSegmentsWithAbort PASSED

kafka.log.CleanerTest > testSegmentGroupingWithSparseOffsets PASSED

kafka.log.CleanerTest > testRecoveryAfterCrash PASSED

kafka.log.CleanerTest > testLogToClean PASSED

kafka.log.CleanerTest > testCleaningWithDeletes PASSED

kafka.log.CleanerTest > testCleanSegments PASSED

kafka.log.CleanerTest > testCleaningWithUnkeyedMessages PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupStable PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatIllegalGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupRebalancing PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaderFailureInSyncGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testGenerationIdIncrementsOnRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupFromIllegalGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testInvalidGroupId PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatUnknownGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testListGroupsIncludesStableGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatDuringRebalanceCausesRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupInconsistentGroupProtocol PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupSessionTimeoutTooLarge PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupSessionTimeoutTooSmall PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupEmptyAssignment PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testCommitOffsetWithDefaultGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupFromUnchangedLeaderShouldRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupUnknownGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testListGroupsIncludesRebalancingGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupFollowerAfterLeader PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testCommitOffsetInAwaitingSync PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupFromUnknownGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupInconsistentProtocolType PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testCommitOffsetFromUnknownGroup PASSED

kafka.coordin

[GitHub] kafka pull request: more test

2015-11-11 Thread ZoneMayor
Github user ZoneMayor closed the pull request at:

https://github.com/apache/kafka/pull/228


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: more test

2015-11-11 Thread ZoneMayor
GitHub user ZoneMayor reopened a pull request:

https://github.com/apache/kafka/pull/228

more test



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ZoneMayor/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/228.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #228


commit dc23d0a0d724cefd550e95fd7dc58230cf480d0b
Author: jinxing 
Date:   2015-09-21T03:25:37Z

MOD: for test

commit 87ed0b5e71991e756aaeb2d8012b89b1c80b38d1
Author: jinxing 
Date:   2015-09-21T03:42:39Z

MOD: another for test

commit c13d6a5e1920cd5c11e96c524527dfb597f5adc4
Author: jinxing 
Date:   2015-09-21T03:48:09Z

MOD: okok-test

commit 2faa61b10965d9b6e7990158c8e492c711a16fb5
Author: jinxing 
Date:   2015-09-21T03:59:21Z

MOD: more test




---


[GitHub] kafka pull request: more test

2015-11-11 Thread ZoneMayor
Github user ZoneMayor closed the pull request at:

https://github.com/apache/kafka/pull/228


---


[GitHub] kafka pull request: KAFKA-2813: selector doesn't close socket conn...

2015-11-11 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/501


---


[jira] [Resolved] (KAFKA-2813) selector doesn't close socket connection on non-IOExceptions

2015-11-11 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-2813.

Resolution: Fixed

Issue resolved by pull request 501
[https://github.com/apache/kafka/pull/501]

> selector doesn't close socket connection on non-IOExceptions
> 
>
> Key: KAFKA-2813
> URL: https://issues.apache.org/jira/browse/KAFKA-2813
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Jun Rao
>Assignee: Jun Rao
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> When running a system test, we saw lots of entries like the following. The 
> issue is that when the current leader becomes a follower, the follower 
> truncates its log. A concurrent fetch request may be in flight at that 
> moment; if so, a KafkaException is thrown while sending the fetch response 
> (in FileMessageSet). The exception propagates through Selector.poll(). 
> Selector catches IOException and closes the corresponding socket, but 
> KafkaException is not an IOException. Since the socket is never closed, 
> Selector.poll() keeps accessing it and keeps hitting the same error.
> [2015-11-11 07:25:01,150] ERROR Processor got uncaught exception. 
> (kafka.network.Processor)
> kafka.common.KafkaException: Size of FileMessageSet 
> /mnt/kafka-data-logs/test_topic-0/.log has been truncated 
> during write: old size 16368, new size 0
> at kafka.log.FileMessageSet.writeTo(FileMessageSet.scala:158)
> at kafka.api.PartitionDataSend.writeTo(FetchResponse.scala:77)
> at 
> org.apache.kafka.common.network.MultiSend.writeTo(MultiSend.java:81)
> at kafka.api.TopicDataSend.writeTo(FetchResponse.scala:148)
> at 
> org.apache.kafka.common.network.MultiSend.writeTo(MultiSend.java:81)
> at kafka.api.FetchResponseSend.writeTo(FetchResponse.scala:291)
> at 
> org.apache.kafka.common.network.KafkaChannel.send(KafkaChannel.java:165)
> at 
> org.apache.kafka.common.network.KafkaChannel.write(KafkaChannel.java:152)
> at org.apache.kafka.common.network.Selector.poll(Selector.java:301)
> at kafka.network.Processor.run(SocketServer.scala:413)
> at java.lang.Thread.run(Thread.java:745)
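The failure mode described in this report can be sketched in isolation. In the snippet below, `Channel`, `pollCatchingIOOnly`, and `pollCatchingAll` are hypothetical stand-ins, not Kafka's actual API (the real classes are `KafkaChannel` and `Selector` in `org.apache.kafka.common.network`); the point is only that a catch clause limited to `IOException` lets an unchecked `KafkaException` escape without closing the channel:

```java
import java.io.IOException;

public class SelectorSketch {
    // Hypothetical stand-in for a network channel.
    static class Channel {
        boolean closed = false;

        // Simulates FileMessageSet.writeTo detecting a truncated file:
        // it throws an unchecked exception, not an IOException.
        void write() throws IOException {
            throw new RuntimeException("Size of FileMessageSet has been truncated during write");
        }

        void close() {
            closed = true;
        }
    }

    // Buggy shape: only IOException triggers close(), so the unchecked
    // exception leaves the channel open and the next poll fails the same way.
    static void pollCatchingIOOnly(Channel ch) {
        try {
            ch.write();
        } catch (IOException e) {
            ch.close();
        } catch (RuntimeException e) {
            // logged as "Processor got uncaught exception"; socket stays open
        }
    }

    // Fixed shape: any exception on the channel closes it.
    static void pollCatchingAll(Channel ch) {
        try {
            ch.write();
        } catch (Exception e) {
            ch.close();
        }
    }

    public static void main(String[] args) {
        Channel leaked = new Channel();
        pollCatchingIOOnly(leaked);
        System.out.println("closed after IOException-only catch: " + leaked.closed); // false

        Channel fixed = new Channel();
        pollCatchingAll(fixed);
        System.out.println("closed after catch-all: " + fixed.closed); // true
    }
}
```

This mirrors the fix in PR 501 only in spirit: broaden the exception handling around channel writes so the socket is always closed on failure.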



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2813) selector doesn't close socket connection on non-IOExceptions

2015-11-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15001743#comment-15001743
 ] 

ASF GitHub Bot commented on KAFKA-2813:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/501







[jira] [Created] (KAFKA-2816) selector.poll() no longer throws IOException

2015-11-11 Thread Jun Rao (JIRA)
Jun Rao created KAFKA-2816:
--

 Summary: selector.poll() no longer throws IOException
 Key: KAFKA-2816
 URL: https://issues.apache.org/jira/browse/KAFKA-2816
 Project: Kafka
  Issue Type: Improvement
Reporter: Jun Rao
 Fix For: 0.9.1.0


We can remove the IOException from the signature of Selectable.poll().





[jira] [Commented] (KAFKA-2816) selector.poll() no longer throws IOException

2015-11-11 Thread Dong Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15001768#comment-15001768
 ] 

Dong Lin commented on KAFKA-2816:
-

I just checked the code. poll(timeout) calls select(timeout), which in turn 
calls nioSelector.select(). And nioSelector.select() can throw IOException. 
Therefore selector.poll() will still throw IOException, right?
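One common way to drop the checked exception from an interface signature, consistent with this observation, is to catch the IOException from the underlying java.nio selector inside poll() and translate it to an unchecked exception rather than declare it. The sketch below uses hypothetical names (`SimpleSelectable`, `NioSelectable`, `PollSketch`) and is not Kafka's real Selectable:

```java
import java.io.IOException;
import java.nio.channels.Selector;

// Hypothetical minimal interface; the real Selectable lives in
// org.apache.kafka.common.network and has a much richer API.
interface SimpleSelectable {
    void poll(long timeoutMs); // note: no "throws IOException"
}

class NioSelectable implements SimpleSelectable, AutoCloseable {
    private final Selector nioSelector;

    NioSelectable() throws IOException {
        this.nioSelector = Selector.open();
    }

    @Override
    public void poll(long timeoutMs) {
        try {
            // java.nio.channels.Selector.select(long) does declare IOException...
            if (timeoutMs > 0) {
                nioSelector.select(timeoutMs);
            } else {
                nioSelector.selectNow();
            }
        } catch (IOException e) {
            // ...but we translate it to an unchecked exception here instead of
            // propagating the checked exception through the interface.
            throw new IllegalStateException("nioSelector.select failed", e);
        }
    }

    @Override
    public void close() throws IOException {
        nioSelector.close();
    }
}

public class PollSketch {
    public static void main(String[] args) throws Exception {
        try (NioSelectable s = new NioSelectable()) {
            // Returns after the 1 ms timeout; the caller handles no checked exception.
            s.poll(1);
            System.out.println("poll completed without declaring IOException");
        }
    }
}
```

Whether wrapping is preferable to keeping `throws IOException` on the interface is exactly the trade-off this ticket raises.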

> selector.poll() no longer throws IOException
> 
>
> Key: KAFKA-2816
> URL: https://issues.apache.org/jira/browse/KAFKA-2816
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jun Rao
> Fix For: 0.9.1.0
>
>
> We can remove the IOException from the signature of Selectable.poll().


