[jira] [Commented] (KAFKA-2730) partition-reassignment tool stops working due to error in registerMetric

2015-11-04 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14989080#comment-14989080
 ] 

Guozhang Wang commented on KAFKA-2730:
--

Thanks [~Ormod], could you try the following in your experiment:

1) set "num.replica.fetchers" to 1 and re-run your settings.
2) still keep "num.replica.fetchers" as 4 and apply the patch below and re-run 
your settings.
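For reference, variant (1) is a one-line change to the broker's server.properties followed by a restart. A minimal sketch against a throwaway copy of the file (the property name `num.replica.fetchers` is real; the sample file contents and path are made up for illustration):

```shell
# Variant (1): force a single replica fetcher thread so only one
# ReplicaFetcherThread (and one set of its Selector metrics) is created.
# We edit a throwaway copy; on a real broker this is config/server.properties.
tmp=$(mktemp)
printf 'broker.id=3\nnum.replica.fetchers=4\n' > "$tmp"
sed -i.bak 's/^num\.replica\.fetchers=.*/num.replica.fetchers=1/' "$tmp"
grep '^num.replica.fetchers' "$tmp"   # prints: num.replica.fetchers=1
rm -f "$tmp" "$tmp.bak"
```

The broker has to be restarted after the edit for the setting to take effect.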

> partition-reassignment tool stops working due to error in registerMetric
> 
>
> Key: KAFKA-2730
> URL: https://issues.apache.org/jira/browse/KAFKA-2730
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Jun Rao
>Assignee: Guozhang Wang
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> I updated our test system to use Kafka from latest revision 
> 7c33475274cb6e65a8e8d907e7fef6e56bc8c8e6 and now I'm seeing:
> [2015-11-03 14:07:01,554] ERROR [KafkaApi-2] error when handling request Name:LeaderAndIsrRequest;Version:0;Controller:3;ControllerEpoch:1;CorrelationId:5;ClientId:3;Leaders:BrokerEndPoint(3,192.168.60.168,21769);PartitionState:(5c700e33-9230-4219-a3e1-42574c175d62-logs,0) -> (LeaderAndIsrInfo:(Leader:3,ISR:3,LeaderEpoch:1,ControllerEpoch:1),ReplicationFactor:3),AllReplicas:2,3,1) (kafka.server.KafkaApis)
> java.lang.IllegalArgumentException: A metric named 'MetricName [name=connection-close-rate, group=replica-fetcher-metrics, description=Connections closed per second in the window., tags={broker-id=3}]' already exists, can't register another one.
> at org.apache.kafka.common.metrics.Metrics.registerMetric(Metrics.java:285)
> at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:177)
> at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:162)
> at org.apache.kafka.common.network.Selector$SelectorMetrics.<init>(Selector.java:578)
> at org.apache.kafka.common.network.Selector.<init>(Selector.java:112)
> at kafka.server.ReplicaFetcherThread.<init>(ReplicaFetcherThread.scala:69)
> at kafka.server.ReplicaFetcherManager.createFetcherThread(ReplicaFetcherManager.scala:35)
> at kafka.server.AbstractFetcherManager$$anonfun$addFetcherForPartitions$2.apply(AbstractFetcherManager.scala:83)
> at kafka.server.AbstractFetcherManager$$anonfun$addFetcherForPartitions$2.apply(AbstractFetcherManager.scala:78)
> at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
> at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
> at kafka.server.AbstractFetcherManager.addFetcherForPartitions(AbstractFetcherManager.scala:78)
> at kafka.server.ReplicaManager.makeFollowers(ReplicaManager.scala:791)
> at kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:628)
> at kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:114)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:71)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> at java.lang.Thread.run(Thread.java:745)
> This happens when I'm running kafka-reassign-partitions.sh. As a result in 
> the verify command one of the partition reassignments says "is still in 
> progress" forever.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2730) partition-reassignment tool stops working due to error in registerMetric

2015-11-04 Thread Hannu Valtonen (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14989083#comment-14989083
 ] 

Hannu Valtonen commented on KAFKA-2730:
---

Sure, will do. I'll report back in a couple of hours.






[jira] [Comment Edited] (KAFKA-2730) partition-reassignment tool stops working due to error in registerMetric

2015-11-04 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14989080#comment-14989080
 ] 

Guozhang Wang edited comment on KAFKA-2730 at 11/4/15 7:59 AM:
---

Thanks [~Ormod], could you try the following in your experiment:

1) set "num.replica.fetchers" to 1 and re-run your settings.
2) still keep "num.replica.fetchers" as 4 and apply the patch below and re-run 
your settings:

https://github.com/apache/kafka/pull/416


was (Author: guozhang):
Thanks [~Ormod], could you try the following in your experiment:

1) set "num.replica.fetchers" to 1 and re-run your settings.
2) still keep "num.replica.fetchers" as 4 and apply the patch below and re-run 
your settings.






[jira] [Created] (KAFKA-2740) Convert Windows bin scripts from CRLF to LF line encodings

2015-11-04 Thread Michael Noll (JIRA)
Michael Noll created KAFKA-2740:
---

 Summary: Convert Windows bin scripts from CRLF to LF line encodings
 Key: KAFKA-2740
 URL: https://issues.apache.org/jira/browse/KAFKA-2740
 Project: Kafka
  Issue Type: Improvement
  Components: core
Affects Versions: 0.8.2.1
Reporter: Michael Noll
Priority: Minor


We currently have an inconsistency with regard to the line encodings of the 
Windows bin/ scripts.  All but three Windows scripts use the same line 
encodings as we do for the rest of the code (LF, aka Unix style).

The three Windows scripts that use CRLF encoding (aka Windows style) are:

* kafka-run-class.bat
* kafka-server-stop.bat
* zookeeper-server-stop.bat

I'd suggest we convert the three scripts above from CRLF to LF.  This will not 
only restore consistency, it will also (and more importantly) resolve woes 
caused by line encoding differences when diffing, patching, etc. these 
files/code.
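The conversion itself is mechanical; a sketch of detecting and fixing one such file (the sample content below is a stand-in for a CRLF-encoded .bat script, not the real file):

```shell
# Stand-in for one of the CRLF-encoded .bat files.
f=$(mktemp)
printf 'echo hello\r\necho world\r\n' > "$f"

# Detect: count lines that end in a carriage return.
cr=$(printf '\r')
grep -c "${cr}\$" "$f"                 # prints: 2

# Convert CRLF -> LF (equivalent to what dos2unix does).
tr -d '\r' < "$f" > "$f.lf" && mv "$f.lf" "$f"

grep -c "${cr}\$" "$f" || true         # prints: 0 (no CRLF lines remain)
rm -f "$f"
```

`git ls-files --eol` can report the same information per tracked file in a checkout.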





[GitHub] kafka pull request: Convert Windows bin scripts from CRLF to LF li...

2015-11-04 Thread miguno
GitHub user miguno opened a pull request:

https://github.com/apache/kafka/pull/417

Convert Windows bin scripts from CRLF to LF line encodings

There are no functional changes to the modified scripts.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/miguno/kafka KAFKA-2740

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/417.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #417


commit 777f4cbd11b19f8c91aed4b3c48b48e417288c30
Author: Michael G. Noll 
Date:   2015-11-04T08:47:14Z

Convert Windows bin scripts from CRLF to LF line encodings




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: KAFKA-2740: Convert Windows bin scripts from C...

2015-11-04 Thread miguno
Github user miguno closed the pull request at:

https://github.com/apache/kafka/pull/417




[jira] [Commented] (KAFKA-2740) Convert Windows bin scripts from CRLF to LF line encodings

2015-11-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14989137#comment-14989137
 ] 

ASF GitHub Bot commented on KAFKA-2740:
---

Github user miguno closed the pull request at:

https://github.com/apache/kafka/pull/417







[GitHub] kafka pull request: KAFKA-2470: Convert Windows bin scripts from C...

2015-11-04 Thread miguno
GitHub user miguno opened a pull request:

https://github.com/apache/kafka/pull/418

KAFKA-2470: Convert Windows bin scripts from CRLF to LF line encodings

There are no functional changes to the modified scripts.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/miguno/kafka KAFKA-2740

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/418.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #418


commit 777f4cbd11b19f8c91aed4b3c48b48e417288c30
Author: Michael G. Noll 
Date:   2015-11-04T08:47:14Z

Convert Windows bin scripts from CRLF to LF line encodings






[jira] [Commented] (KAFKA-2470) kafka-producer-perf-test.sh can't configure all to request-num-acks

2015-11-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14989142#comment-14989142
 ] 

ASF GitHub Bot commented on KAFKA-2470:
---

GitHub user miguno opened a pull request:

https://github.com/apache/kafka/pull/418

KAFKA-2470: Convert Windows bin scripts from CRLF to LF line encodings

There are no functional changes to the modified scripts.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/miguno/kafka KAFKA-2740

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/418.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #418


commit 777f4cbd11b19f8c91aed4b3c48b48e417288c30
Author: Michael G. Noll 
Date:   2015-11-04T08:47:14Z

Convert Windows bin scripts from CRLF to LF line encodings




> kafka-producer-perf-test.sh can't configure all to request-num-acks
> ---
>
> Key: KAFKA-2470
> URL: https://issues.apache.org/jira/browse/KAFKA-2470
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, tools
>Affects Versions: 0.8.2.1
> Environment: Linux
>Reporter: Bo Wang
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> For New Producer API, kafka-producer-perf-test.sh can't configure all to request-num-acks:
> bin]# ./kafka-producer-perf-test.sh --topic test --broker-list host:port --messages 100 --message-size 200 --new-producer --sync --batch-size 1 --request-num-acks all
> Exception in thread "main" joptsimple.OptionArgumentConversionException: Cannot convert argument 'all' of option ['request-num-acks'] to class java.lang.Integer
>   at joptsimple.ArgumentAcceptingOptionSpec.convert(ArgumentAcceptingOptionSpec.java:237)
>   at joptsimple.OptionSet.valuesOf(OptionSet.java:226)
>   at joptsimple.OptionSet.valueOf(OptionSet.java:170)
>   at kafka.tools.ProducerPerformance$ProducerPerfConfig.<init>(ProducerPerformance.scala:146)
>   at kafka.tools.ProducerPerformance$.main(ProducerPerformance.scala:42)
>   at kafka.tools.ProducerPerformance.main(ProducerPerformance.scala)
> Caused by: joptsimple.internal.ReflectionException: java.lang.NumberFormatException: For input string: "all"
>   at joptsimple.internal.Reflection.reflectionException(Reflection.java:136)
>   at joptsimple.internal.Reflection.invoke(Reflection.java:123)
>   at joptsimple.internal.MethodInvokingValueConverter.convert(MethodInvokingValueConverter.java:48)
>   at joptsimple.ArgumentAcceptingOptionSpec.convert(ArgumentAcceptingOptionSpec.java:234)
>   ... 5 more





[GitHub] kafka pull request: KAFKA-2470: Convert Windows bin scripts from C...

2015-11-04 Thread miguno
Github user miguno closed the pull request at:

https://github.com/apache/kafka/pull/418




[jira] [Commented] (KAFKA-2470) kafka-producer-perf-test.sh can't configure all to request-num-acks

2015-11-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14989154#comment-14989154
 ] 

ASF GitHub Bot commented on KAFKA-2470:
---

Github user miguno closed the pull request at:

https://github.com/apache/kafka/pull/418







[GitHub] kafka pull request: KAFKA-2740: Convert Windows bin scripts from C...

2015-11-04 Thread miguno
GitHub user miguno opened a pull request:

https://github.com/apache/kafka/pull/419

KAFKA-2740: Convert Windows bin scripts from CRLF to LF line encodings

There are no functional changes to the modified scripts.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/miguno/kafka KAFKA-2740

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/419.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #419


commit 777f4cbd11b19f8c91aed4b3c48b48e417288c30
Author: Michael G. Noll 
Date:   2015-11-04T08:47:14Z

Convert Windows bin scripts from CRLF to LF line encodings






[jira] [Commented] (KAFKA-2740) Convert Windows bin scripts from CRLF to LF line encodings

2015-11-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14989155#comment-14989155
 ] 

ASF GitHub Bot commented on KAFKA-2740:
---

GitHub user miguno opened a pull request:

https://github.com/apache/kafka/pull/419

KAFKA-2740: Convert Windows bin scripts from CRLF to LF line encodings

There are no functional changes to the modified scripts.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/miguno/kafka KAFKA-2740

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/419.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #419


commit 777f4cbd11b19f8c91aed4b3c48b48e417288c30
Author: Michael G. Noll 
Date:   2015-11-04T08:47:14Z

Convert Windows bin scripts from CRLF to LF line encodings









[jira] [Updated] (KAFKA-2740) Convert Windows bin scripts from CRLF to LF line encodings

2015-11-04 Thread Michael Noll (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Noll updated KAFKA-2740:

Status: Patch Available  (was: Open)






[jira] [Resolved] (KAFKA-2470) kafka-producer-perf-test.sh can't configure all to request-num-acks

2015-11-04 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy resolved KAFKA-2470.

   Resolution: Fixed
Fix Version/s: 0.9.0.0

This got fixed in KAFKA-2562. kafka-producer-perf-test.sh now uses the new 
ProducerPerformance.java for testing.






[jira] [Commented] (KAFKA-2728) kafka-run-class.sh: incorrect path to tools-log4j.properties for KAFKA_LOG4J_OPTS

2015-11-04 Thread Michael Noll (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14989185#comment-14989185
 ] 

Michael Noll commented on KAFKA-2728:
-

Rechecking the code, I found that kafka-run-class.sh uses a different way to set 
up {{base_dir}} than all the other *.sh scripts, which makes the current code 
work correctly.

> kafka-run-class.sh: incorrect path to tools-log4j.properties for 
> KAFKA_LOG4J_OPTS
> -
>
> Key: KAFKA-2728
> URL: https://issues.apache.org/jira/browse/KAFKA-2728
> Project: Kafka
>  Issue Type: Bug
>  Components: config, core
>Affects Versions: 0.9.0.0
>Reporter: Michael Noll
>
> I noticed that the {{bin/kafka-run-class.sh}} and 
> {{bin/windows/kafka-run-class.bat}} scripts in current trunk (as of commit 
> e466ccd) seem to set up the KAFKA_LOG4J_OPTS environment variable 
> incorrectly.  Notably, the way the path to 
> {{config/tools-log4j.properties}} is constructed is wrong, and it is 
> inconsistent with how the other bin scripts configure the paths to their 
> {{config/*.properties}} files.
> Example: bin/kafka-run-class.sh (one of the two buggy scripts)
> {code}
> if [ -z "$KAFKA_LOG4J_OPTS" ]; then
>   # Log to console. This is a tool.
>   KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/config/tools-log4j.properties"
> else
>   ...snip...
> {code}
> Example: bin/kafka-server-start.sh (a correct script)
> {code}
> if [ "x$KAFKA_LOG4J_OPTS" = "x" ]; then
>     export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/log4j.properties"
> fi
> {code}
> In the examples above, note the difference between:
> {code}
> # Without ".."
> file:$base_dir/config/tools-log4j.properties
> # With ".."
> file:$base_dir/../config/log4j.properties
> {code}
> *How to fix*
> Set up {{KAFKA_LOG4J_OPTS}} in {{kafka-run-class.sh}} as follows:
> {code}
> KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/tools-log4j.properties"
> {code}
> Set up {{KAFKA_LOG4J_OPTS}} in {{kafka-run-class.bat}} as follows (careful, I 
> am not that familiar with Windows .bat scripting):
> {code}
> set KAFKA_LOG4J_OPTS=-Dlog4j.configuration=file:%BASE_DIR%/../config/tools-log4j.properties
> {code}
> Alternatively, for the windows script, we could use the same code variant we 
> use in e.g. {{kafka-server-start.bat}}, where we use {{~dp0}} instead of 
> {{BASE_DIR}} (I'd opt for this variant so that the windows scripts are 
> consistent):
> {code}
> set KAFKA_LOG4J_OPTS=-Dlog4j.configuration=file:%~dp0../../config/tools-log4j.properties
> {code}





[jira] [Resolved] (KAFKA-2728) kafka-run-class.sh: incorrect path to tools-log4j.properties for KAFKA_LOG4J_OPTS

2015-11-04 Thread Michael Noll (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Noll resolved KAFKA-2728.
-
Resolution: Invalid

> kafka-run-class.sh: incorrect path to tools-log4j.properties for 
> KAFKA_LOG4J_OPTS
> -
>
> Key: KAFKA-2728
> URL: https://issues.apache.org/jira/browse/KAFKA-2728
> Project: Kafka
>  Issue Type: Bug
>  Components: config, core
>Affects Versions: 0.9.0.0
>Reporter: Michael Noll
>
> I noticed that the {{bin/kafka-run-class.sh}} and the 
> {{bin/windows/kafka-run-class.bat}} scripts in current trunk (as of commit 
> e466ccd) seems to set up the KAFKA_LOG4J_OPTS environment variable 
> incorrectly.  Noticeably, the way to construct the path to 
> {{config/tools-log4j.properties}} is wrong, and it is inconsistent to how the 
> other bin scripts configure the paths to their {{config/*.properties}} files.
> Example: bin/kafka-run-class.sh (one of the two buggy scripts)
> {code}
> if [ -z "$KAFKA_LOG4J_OPTS" ]; then
>   # Log to console. This is a tool.
>   KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/config/tools-log4j.properties"
> else
>   ...snip...
> {code}
> Example: bin/kafka-server-start.sh (a correct script)
> {code}
> if [ "x$KAFKA_LOG4J_OPTS" = "x" ]; then
>     export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/log4j.properties"
> fi
> {code}
> In the examples above, note the difference between:
> {code}
> # Without ".."
> file:$base_dir/config/tools-log4j.properties
> # With ".."
> file:$base_dir/../config/log4j.properties
> {code}
> *How to fix*
> Set up {{KAFKA_LOG4J_OPTS}} in {{kafka-run-class.sh}} as follows:
> {code}
> KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/tools-log4j.properties"
> {code}
> Set up {{KAFKA_LOG4J_OPTS}} in {{kafka-run-class.bat}} as follows (careful, I
> am not that familiar with Windows .bat scripting):
> {code}
> set KAFKA_LOG4J_OPTS=-Dlog4j.configuration=file:%BASE_DIR%/../config/tools-log4j.properties
> {code}
> Alternatively, for the Windows script, we could use the same code variant we
> use in e.g. {{kafka-server-start.bat}}, where we use {{~dp0}} instead of
> {{BASE_DIR}} (I'd opt for this variant so that the Windows scripts are
> consistent):
> {code}
> set KAFKA_LOG4J_OPTS=-Dlog4j.configuration=file:%~dp0../../config/tools-log4j.properties
> {code}
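For illustration, the proposed {{kafka-run-class.sh}} fix boils down to a guard like the sketch below. The {{base_dir}} value here is a placeholder; the real script derives it from its own location.

```shell
#!/bin/sh
# Sketch of the proposed fix: resolve tools-log4j.properties relative to the
# parent of bin/, mirroring what kafka-server-start.sh already does.
# base_dir is a placeholder; the real script computes it from $0.
base_dir="/opt/kafka/bin"

# Cleared here only so the guard below demonstrably fires in this sketch.
unset KAFKA_LOG4J_OPTS

if [ -z "$KAFKA_LOG4J_OPTS" ]; then
  # Log to console. This is a tool.
  KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/tools-log4j.properties"
fi

echo "$KAFKA_LOG4J_OPTS"
```

With {{base_dir}} pointing at {{bin/}}, the {{..}} makes the URI resolve to the sibling {{config/}} directory, which is exactly the difference the report calls out.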



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2736) ZkClient doesn't handle SaslAuthenticated

2015-11-04 Thread Flavio Junqueira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14989406#comment-14989406
 ] 

Flavio Junqueira commented on KAFKA-2736:
-

I've created a pull request for zkclient 

https://github.com/sgroschupf/zkclient/pull/39

> ZkClient doesn't handle SaslAuthenticated
> -
>
> Key: KAFKA-2736
> URL: https://issues.apache.org/jira/browse/KAFKA-2736
> Project: Kafka
>  Issue Type: Bug
>  Components: zkclient
>Affects Versions: 0.9.0.0
>Reporter: Flavio Junqueira
>
> See https://github.com/sgroschupf/zkclient/issues/38





[jira] [Commented] (KAFKA-2730) partition-reassignment tool stops working due to error in registerMetric

2015-11-04 Thread Hannu Valtonen (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14989415#comment-14989415
 ] 

Hannu Valtonen commented on KAFKA-2730:
---

Both the workaround (solution 1) and the actual patch (2) seem to work fine, 
both in manual testing and against our automated tests. 

Thanks for the quick solution.

> partition-reassignment tool stops working due to error in registerMetric
> 
>
> Key: KAFKA-2730
> URL: https://issues.apache.org/jira/browse/KAFKA-2730
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Jun Rao
>Assignee: Guozhang Wang
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> I updated our test system to use Kafka from latest revision 
> 7c33475274cb6e65a8e8d907e7fef6e56bc8c8e6 and now I'm seeing:
> [2015-11-03 14:07:01,554] ERROR [KafkaApi-2] error when handling request 
> Name:LeaderAndIsrRequest;Version:0;Controller:3;ControllerEpoch:1;CorrelationId:5;ClientId:3;Leaders:BrokerEndPoint(3,192.168.60.168,21769);PartitionState:(5c700e33-9230-4219-a3e1-42574c175d62-logs,0)
>  -> 
> (LeaderAndIsrInfo:(Leader:3,ISR:3,LeaderEpoch:1,ControllerEpoch:1),ReplicationFactor:3),AllReplicas:2,3,1)
>  (kafka.server.KafkaApis)
> java.lang.IllegalArgumentException: A metric named 'MetricName 
> [name=connection-close-rate, group=replica-fetcher-metrics, 
> description=Connections closed per second in the window., 
> tags={broker-id=3}]' already exists, can't register another one.
> at org.apache.kafka.common.metrics.Metrics.registerMetric(Metrics.java:285)
> at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:177)
> at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:162)
> at 
> org.apache.kafka.common.network.Selector$SelectorMetrics.(Selector.java:578)
> at org.apache.kafka.common.network.Selector.(Selector.java:112)
> at kafka.server.ReplicaFetcherThread.(ReplicaFetcherThread.scala:69)
> at 
> kafka.server.ReplicaFetcherManager.createFetcherThread(ReplicaFetcherManager.scala:35)
> at 
> kafka.server.AbstractFetcherManager$$anonfun$addFetcherForPartitions$2.apply(AbstractFetcherManager.scala:83)
> at 
> kafka.server.AbstractFetcherManager$$anonfun$addFetcherForPartitions$2.apply(AbstractFetcherManager.scala:78)
> at 
> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
> at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
> at 
> kafka.server.AbstractFetcherManager.addFetcherForPartitions(AbstractFetcherManager.scala:78)
> at kafka.server.ReplicaManager.makeFollowers(ReplicaManager.scala:791)
> at 
> kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:628)
> at kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:114)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:71)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> at java.lang.Thread.run(Thread.java:745)
> This happens when I'm running kafka-reassign-partitions.sh. As a result in 
> the verify command one of the partition reassignments says "is still in 
> progress" forever.





[jira] [Commented] (KAFKA-2731) Kerberos on same host with Kafka does not find server in its database on Ubuntu

2015-11-04 Thread Mohammad Abbasi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14989467#comment-14989467
 ] 

Mohammad Abbasi commented on KAFKA-2731:


I configured Zookeeper too with a new JAAS config file and now everything works 
well!
I read docs/security.html; it is complete, but because this was my first contact 
with Kerberos and I was new to its concepts, I didn't know that I also had to set 
up a Server section in Zookeeper's JAAS configuration.
Thank you again.

> Kerberos on same host with Kafka does not find server in its database on 
> Ubuntu
> 
>
> Key: KAFKA-2731
> URL: https://issues.apache.org/jira/browse/KAFKA-2731
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Mohammad Abbasi
>
> Configuring Kafka to use keytab created in Kerberos, as it's said in 
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=61326390,
> Kerberos logs:
> Nov 02 17:25:13 myhost krb5kdc[3307](info): TGS_REQ (5 etypes {17 16 23 1 3}) 
> 192.168.18.241: LOOKING_UP_SERVER: authtime 0,  kafka/myh...@a.org for 
> , Server not found in Kerberos database
> Kafka's log:
> SASL Connection info:
> [2015-11-03 18:33:00,544] DEBUG creating sasl client: 
> client=kafka/myh...@a.org;service=zookeeper;serviceHostname=myhost 
> (org.apache.zookeeper.client.ZooKeeperSaslClient)
> and error:
> [2015-11-03 18:33:00,607] ERROR An error: 
> (java.security.PrivilegedActionException: javax.security.sasl.SaslException: 
> GSS initiate failed [Caused by GSSException: No valid credentials provided 
> (Mechanism level: Server not found in Kerberos database (7) - 
> LOOKING_UP_SERVER)]) occurred when evaluating Zookeeper Quorum Member's  
> received SASL token. Zookeeper Client will go to AUTH_FAILED state. 
> (org.apache.zookeeper.client.ZooKeeperSaslClient)
> [2015-11-03 18:33:00,607] ERROR SASL authentication with Zookeeper Quorum 
> member failed: javax.security.sasl.SaslException: An error: 
> (java.security.PrivilegedActionException: javax.security.sasl.SaslException: 
> GSS initiate failed [Caused by GSSException: No valid credentials provided 
> (Mechanism level: Server not found in Kerberos database (7) - 
> LOOKING_UP_SERVER)]) occurred when evaluating Zookeeper Quorum Member's  
> received SASL token. Zookeeper Client will go to AUTH_FAILED state. 
> (org.apache.zookeeper.ClientCnxn)
> Kerberos works ok in kinit and kvno with the keytab.
> Some people said it's a DNS or /etc/hosts problem, but nslookup was OK with 
> both the IP and the hostname, and /etc/hosts is:
> 127.0.0.1   myhost localhost
> # The following lines are desirable for IPv6 capable hosts
> ::1 ip6-localhost ip6-loopback
> fe00::0 ip6-localnet
> ff00::0 ip6-mcastprefix
> ff02::1 ip6-allnodes
> ff02::2 ip6-allrouters
> I tested it with the host's ip too.
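A note for anyone else hitting this: the missing piece described in the comment above, a {{Server}} section in ZooKeeper's JAAS file, can be sketched roughly as below. The keytab path, principal, and file location are hypothetical placeholders, not values from this report.

```shell
#!/bin/sh
# Hypothetical sketch: give ZooKeeper its own JAAS file containing a Server
# section so it can accept SASL/Kerberos clients. Keytab path and principal
# below are placeholders for illustration only.
cat > /tmp/zookeeper-jaas.conf <<'EOF'
Server {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/zookeeper.keytab"
    principal="zookeeper/myhost@A.ORG";
};
EOF

# Hand the JAAS file to ZooKeeper's JVM via the standard system property
# (e.g. through KAFKA_OPTS when starting it with zookeeper-server-start.sh):
export KAFKA_OPTS="-Djava.security.auth.login.config=/tmp/zookeeper-jaas.conf"
echo "$KAFKA_OPTS"
```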





[jira] [Resolved] (KAFKA-2731) Kerberos on same host with Kafka does not find server in its database on Ubuntu

2015-11-04 Thread Flavio Junqueira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flavio Junqueira resolved KAFKA-2731.
-
Resolution: Not A Problem

No worries, thanks for checking [~mabbasi90.class].

> Kerberos on same host with Kafka does not find server in its database on 
> Ubuntu
> 
>
> Key: KAFKA-2731
> URL: https://issues.apache.org/jira/browse/KAFKA-2731
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Mohammad Abbasi
>
> Configuring Kafka to use keytab created in Kerberos, as it's said in 
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=61326390,
> Kerberos logs:
> Nov 02 17:25:13 myhost krb5kdc[3307](info): TGS_REQ (5 etypes {17 16 23 1 3}) 
> 192.168.18.241: LOOKING_UP_SERVER: authtime 0,  kafka/myh...@a.org for 
> , Server not found in Kerberos database
> Kafka's log:
> SASL Connection info:
> [2015-11-03 18:33:00,544] DEBUG creating sasl client: 
> client=kafka/myh...@a.org;service=zookeeper;serviceHostname=myhost 
> (org.apache.zookeeper.client.ZooKeeperSaslClient)
> and error:
> [2015-11-03 18:33:00,607] ERROR An error: 
> (java.security.PrivilegedActionException: javax.security.sasl.SaslException: 
> GSS initiate failed [Caused by GSSException: No valid credentials provided 
> (Mechanism level: Server not found in Kerberos database (7) - 
> LOOKING_UP_SERVER)]) occurred when evaluating Zookeeper Quorum Member's  
> received SASL token. Zookeeper Client will go to AUTH_FAILED state. 
> (org.apache.zookeeper.client.ZooKeeperSaslClient)
> [2015-11-03 18:33:00,607] ERROR SASL authentication with Zookeeper Quorum 
> member failed: javax.security.sasl.SaslException: An error: 
> (java.security.PrivilegedActionException: javax.security.sasl.SaslException: 
> GSS initiate failed [Caused by GSSException: No valid credentials provided 
> (Mechanism level: Server not found in Kerberos database (7) - 
> LOOKING_UP_SERVER)]) occurred when evaluating Zookeeper Quorum Member's  
> received SASL token. Zookeeper Client will go to AUTH_FAILED state. 
> (org.apache.zookeeper.ClientCnxn)
> Kerberos works ok in kinit and kvno with the keytab.
> Some people said it's a DNS or /etc/hosts problem, but nslookup was OK with 
> both the IP and the hostname, and /etc/hosts is:
> 127.0.0.1   myhost localhost
> # The following lines are desirable for IPv6 capable hosts
> ::1 ip6-localhost ip6-loopback
> fe00::0 ip6-localnet
> ff00::0 ip6-mcastprefix
> ff02::1 ip6-allnodes
> ff02::2 ip6-allrouters
> I tested it with the host's ip too.





Re: One more Kafka Meetup hosted by LinkedIn in 2015 (this time in San Francisco) - does anyone want to talk?

2015-11-04 Thread Joe Stein
They should all be on the user groups section of the confluence page
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+papers+and+presentations
for those for which there was video. It might need some curating, but that is
where it has been going so far.

~ Joe Stein

On Tue, Nov 3, 2015 at 4:48 PM, Grant Henke  wrote:

> Is there a place where we can find all previously streamed/recorded
> meetups?
>
> Thank you,
> Grant
>
> On Tue, Nov 3, 2015 at 2:07 PM, Ed Yakabosky 
> wrote:
>
> > I'm sorry to hear that Lukas.  I have heard that people are starting to
> do
> > carpools via rydeful.com for some of these meetups.
> >
> > Additionally, we will live stream and record the presentations, so you
> can
> > participate remotely.
> >
> > Ed
> >
> > On Tue, Nov 3, 2015 at 10:43 AM, Lukas Steiblys 
> > wrote:
> >
> > > This is sad news. I was looking forward to finally going to a Kafka or
> > > Samza meetup. Going to Mountain View for a meetup is just unrealistic
> > with
> > > 2h travel time each way.
> > >
> > > Lukas
> > >
> > > -Original Message- From: Ed Yakabosky
> > > Sent: Tuesday, November 3, 2015 10:36 AM
> > > To: us...@kafka.apache.org ; dev@kafka.apache.org ; Clark Haskins
> > > Subject: Re: One more Kafka Meetup hosted by LinkedIn in 2015 (this
> time
> > > in San Francisco) - does anyone want to talk?
> > >
> > > Hi all,
> > >
> > > Two corrections to the invite:
> > >
> > >   1. The invitation is for November 18, 2015.  *NOT 2016.*  I was a
> > little
> > >   hasty...
> > >   2. LinkedIn has finished remodeling our broadcast room, so we are
> going
> > >
> > >   to host the meet up in Mountain View, not San Francisco.
> > >
> > > We've arranged for speakers from HortonWorks to talk about Security and
> > > LinkedIn to talk about Quotas.  We are still looking for one more
> > speaker,
> > > so please let me know if you are interested.
> > >
> > > Thanks!
> > > Ed
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > > On Fri, Oct 30, 2015 at 12:49 PM, Ed Yakabosky <
> eyakabo...@linkedin.com>
> > > wrote:
> > >
> > > Hi all,
> > >>
> > >> LinkedIn is hoping to host one more Apache Kafka meetup this year on
> > >> November 18 in our San Francisco office.  We're working on building
> the
> > >> agenda now.  Does anyone want to talk?  Please send me (and Clark) a
> > >> private email with a short description of what you would be talking
> > about
> > >> if interested.
> > >>
> > >> --
> > >> Thanks,
> > >>
> > >> Ed Yakabosky
> > >> Technical Program Management @ LinkedIn
> > >>
> > >>
> > >
> > > --
> > > Thanks,
> > > Ed Yakabosky
> > >
> >
> >
> >
> > --
> > Thanks,
> > Ed Yakabosky
> >
>
>
>
> --
> Grant Henke
> Software Engineer | Cloudera
> gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke
>


[GitHub] kafka pull request: KAFKA-2722: Improve ISR change propagation.

2015-11-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/402


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-2722) Improve ISR change propagation

2015-11-04 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2722:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 402
[https://github.com/apache/kafka/pull/402]

> Improve ISR change propagation
> --
>
> Key: KAFKA-2722
> URL: https://issues.apache.org/jira/browse/KAFKA-2722
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Currently the ISR change propagation interval is hard-coded to 5 seconds, 
> which might still create a lot of ISR change propagation for a large cluster 
> in cases such as a rolling bounce. The patch uses a dynamic propagation 
> interval and fixes a performance bug in IsrChangeNotificationListener on the 
> controller.





[jira] [Commented] (KAFKA-2722) Improve ISR change propagation

2015-11-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14989554#comment-14989554
 ] 

ASF GitHub Bot commented on KAFKA-2722:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/402


> Improve ISR change propagation
> --
>
> Key: KAFKA-2722
> URL: https://issues.apache.org/jira/browse/KAFKA-2722
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Currently the ISR change propagation interval is hard-coded to 5 seconds, 
> which might still create a lot of ISR change propagation for a large cluster 
> in cases such as a rolling bounce. The patch uses a dynamic propagation 
> interval and fixes a performance bug in IsrChangeNotificationListener on the 
> controller.





Build failed in Jenkins: kafka-trunk-jdk8 #95

2015-11-04 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-2722; Improve ISR change propagation.

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (Ubuntu ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 70a7d5786cc3f04b5f3d964eb1fd1d826e9b9e0f 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 70a7d5786cc3f04b5f3d964eb1fd1d826e9b9e0f
 > git rev-list 98db5ea94fcf7600137b5072453705c2a62e1f54 # timeout=10
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson1745908801999749061.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.1/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 10.778 secs
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson8851117522331337069.sh
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll docsJarAll 
testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:contrib:clean UP-TO-DATE
:copycat:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:contrib:hadoop-consumer:clean UP-TO-DATE
:contrib:hadoop-producer:clean UP-TO-DATE
:copycat:api:clean UP-TO-DATE
:copycat:file:clean UP-TO-DATE
:copycat:json:clean UP-TO-DATE
:copycat:runtime:clean UP-TO-DATE
:jar_core_2_10_5
Building project 'core' with Scala version 2.10.5
:kafka-trunk-jdk8:clients:compileJava
:jar_core_2_10_5 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 10.901 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45


Build failed in Jenkins: kafka-trunk-jdk7 #755

2015-11-04 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-2722; Improve ISR change propagation.

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (Ubuntu ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 70a7d5786cc3f04b5f3d964eb1fd1d826e9b9e0f 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 70a7d5786cc3f04b5f3d964eb1fd1d826e9b9e0f
 > git rev-list 98db5ea94fcf7600137b5072453705c2a62e1f54 # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson4790101061604007061.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 12.837 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson703082196164120070.sh
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll docsJarAll 
testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:contrib:clean UP-TO-DATE
:copycat:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:contrib:hadoop-consumer:clean UP-TO-DATE
:contrib:hadoop-producer:clean UP-TO-DATE
:copycat:api:clean UP-TO-DATE
:copycat:file:clean UP-TO-DATE
:copycat:json:clean UP-TO-DATE
:copycat:runtime:clean UP-TO-DATE
:jar_core_2_10_5
Building project 'core' with Scala version 2.10.5
:kafka-trunk-jdk7:clients:compileJava
:jar_core_2_10_5 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 14.755 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51


[jira] [Updated] (KAFKA-2740) Convert Windows bin scripts from CRLF to LF line encodings

2015-11-04 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2740:
-
   Resolution: Fixed
Fix Version/s: 0.9.0.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 419
[https://github.com/apache/kafka/pull/419]

> Convert Windows bin scripts from CRLF to LF line encodings
> --
>
> Key: KAFKA-2740
> URL: https://issues.apache.org/jira/browse/KAFKA-2740
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.8.2.1
>Reporter: Michael Noll
>Priority: Minor
> Fix For: 0.9.0.0
>
>
> We currently have an inconsistency with regards to line encodings of the 
> Windows bin/ scripts.  All but three Windows scripts use the same line 
> encodings as we do for the rest of the code (LF aka Unix style).
> The three Windows scripts that use CRLF encoding (aka Windows style) are:
> * kafka-run-class.bat
> * kafka-server-stop.bat
> * zookeeper-server-stop.bat
> I'd suggest we convert the three scripts above from CRLF to LF.  This will 
> not only restore consistency, it will also (and more importantly) resolve 
> woes caused by line encoding differences when diffing, patching, etc. these 
> files/code.





[GitHub] kafka pull request: KAFKA-2740: Convert Windows bin scripts from C...

2015-11-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/419




[jira] [Commented] (KAFKA-2740) Convert Windows bin scripts from CRLF to LF line encodings

2015-11-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14989946#comment-14989946
 ] 

ASF GitHub Bot commented on KAFKA-2740:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/419


> Convert Windows bin scripts from CRLF to LF line encodings
> --
>
> Key: KAFKA-2740
> URL: https://issues.apache.org/jira/browse/KAFKA-2740
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.8.2.1
>Reporter: Michael Noll
>Priority: Minor
> Fix For: 0.9.0.0
>
>
> We currently have an inconsistency with regards to line encodings of the 
> Windows bin/ scripts.  All but three Windows scripts use the same line 
> encodings as we do for the rest of the code (LF aka Unix style).
> The three Windows scripts that use CRLF encoding (aka Windows style) are:
> * kafka-run-class.bat
> * kafka-server-stop.bat
> * zookeeper-server-stop.bat
> I'd suggest we convert the three scripts above from CRLF to LF.  This will 
> not only restore consistency, it will also (and more importantly) resolve 
> woes caused by line encoding differences when diffing, patching, etc. these 
> files/code.





Build failed in Jenkins: kafka-trunk-jdk7 #756

2015-11-04 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2740: Convert Windows bin scripts from CRLF to LF line 
encodings

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (Ubuntu ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 7ded64bc2edf8645d7fd113633a56791bf1e8e4a 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 7ded64bc2edf8645d7fd113633a56791bf1e8e4a
 > git rev-list 70a7d5786cc3f04b5f3d964eb1fd1d826e9b9e0f # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson898765191827274.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 12.033 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson8136075842187995279.sh
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll docsJarAll 
testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:contrib:clean UP-TO-DATE
:copycat:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:contrib:hadoop-consumer:clean UP-TO-DATE
:contrib:hadoop-producer:clean UP-TO-DATE
:copycat:api:clean UP-TO-DATE
:copycat:file:clean UP-TO-DATE
:copycat:json:clean UP-TO-DATE
:copycat:runtime:clean UP-TO-DATE
:jar_core_2_10_5
Building project 'core' with Scala version 2.10.5
:kafka-trunk-jdk7:clients:compileJava
:jar_core_2_10_5 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 13.42 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51


[jira] [Commented] (KAFKA-2730) partition-reassignment tool stops working due to error in registerMetric

2015-11-04 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14990011#comment-14990011
 ] 

Guozhang Wang commented on KAFKA-2730:
--

Great to hear that; I will merge the patch to trunk so it is included in the 0.9.0 
release.

Thanks for reporting.

> partition-reassignment tool stops working due to error in registerMetric
> 
>
> Key: KAFKA-2730
> URL: https://issues.apache.org/jira/browse/KAFKA-2730
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Jun Rao
>Assignee: Guozhang Wang
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> I updated our test system to use Kafka from latest revision 
> 7c33475274cb6e65a8e8d907e7fef6e56bc8c8e6 and now I'm seeing:
> [2015-11-03 14:07:01,554] ERROR [KafkaApi-2] error when handling request 
> Name:LeaderAndIsrRequest;Version:0;Controller:3;ControllerEpoch:1;CorrelationId:5;ClientId:3;Leaders:BrokerEndPoint(3,192.168.60.168,21769);PartitionState:(5c700e33-9230-4219-a3e1-42574c175d62-logs,0)
>  -> 
> (LeaderAndIsrInfo:(Leader:3,ISR:3,LeaderEpoch:1,ControllerEpoch:1),ReplicationFactor:3),AllReplicas:2,3,1)
>  (kafka.server.KafkaApis)
> java.lang.IllegalArgumentException: A metric named 'MetricName 
> [name=connection-close-rate, group=replica-fetcher-metrics, 
> description=Connections closed per second in the window., 
> tags={broker-id=3}]' already exists, can't register another one.
> at org.apache.kafka.common.metrics.Metrics.registerMetric(Metrics.java:285)
> at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:177)
> at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:162)
> at 
> org.apache.kafka.common.network.Selector$SelectorMetrics.(Selector.java:578)
> at org.apache.kafka.common.network.Selector.(Selector.java:112)
> at kafka.server.ReplicaFetcherThread.(ReplicaFetcherThread.scala:69)
> at 
> kafka.server.ReplicaFetcherManager.createFetcherThread(ReplicaFetcherManager.scala:35)
> at 
> kafka.server.AbstractFetcherManager$$anonfun$addFetcherForPartitions$2.apply(AbstractFetcherManager.scala:83)
> at 
> kafka.server.AbstractFetcherManager$$anonfun$addFetcherForPartitions$2.apply(AbstractFetcherManager.scala:78)
> at 
> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
> at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
> at 
> kafka.server.AbstractFetcherManager.addFetcherForPartitions(AbstractFetcherManager.scala:78)
> at kafka.server.ReplicaManager.makeFollowers(ReplicaManager.scala:791)
> at 
> kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:628)
> at kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:114)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:71)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> at java.lang.Thread.run(Thread.java:745)
> This happens when I'm running kafka-reassign-partitions.sh. As a result in 
> the verify command one of the partition reassignments says "is still in 
> progress" forever.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2727) initialize only the part of the topology relevant to the task

2015-11-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14990032#comment-14990032
 ] 

ASF GitHub Bot commented on KAFKA-2727:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/411


> initialize only the part of the topology relevant to the task
> -
>
> Key: KAFKA-2727
> URL: https://issues.apache.org/jira/browse/KAFKA-2727
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Affects Versions: 0.9.0.0
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
>
> Currently each streaming task initializes the entire topology regardless of 
> the assigned topic-partitions. This is wasteful, especially when the topology 
> has local state stores: all local state stores are restored from their 
> changelog topics even when they are not actually used in the task execution. 
> To fix this, task initialization should be aware of the relevant subgraph of 
> the topology and initialize only the processors and state stores in that 
> subgraph.
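The pruning described in the issue amounts to a reachability computation over the processor graph. A minimal, hypothetical sketch (the node/edge representation is illustrative, not the actual Kafka Streams topology API):

```python
from collections import deque

def connected_subgraph(edges, sources, assigned_topics):
    """Return the nodes reachable from the source nodes whose input
    topics appear among the task's assigned partitions."""
    # edges: node -> list of downstream nodes; sources: source node -> topic
    start = [node for node, topic in sources.items() if topic in assigned_topics]
    seen = set(start)
    queue = deque(start)
    while queue:  # breadth-first walk of the downstream edges
        node = queue.popleft()
        for child in edges.get(node, []):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return seen

# A task assigned only partitions of topic "a" should initialize the
# "a" branch (and its state store), not the "b" branch.
edges = {"src-a": ["proc-1"], "proc-1": ["store-1"], "src-b": ["proc-2"]}
sources = {"src-a": "a", "src-b": "b"}
assert connected_subgraph(edges, sources, {"a"}) == {"src-a", "proc-1", "store-1"}
```

Only the stores in the returned set would then be restored from their changelog topics.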





[GitHub] kafka pull request: KAFKA-2727: Topology partial construction

2015-11-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/411


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-2727) initialize only the part of the topology relevant to the task

2015-11-04 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-2727.
--
   Resolution: Fixed
Fix Version/s: 0.9.0.0

Issue resolved by pull request 411
[https://github.com/apache/kafka/pull/411]

> initialize only the part of the topology relevant to the task
> -
>
> Key: KAFKA-2727
> URL: https://issues.apache.org/jira/browse/KAFKA-2727
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Affects Versions: 0.9.0.0
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
> Fix For: 0.9.0.0
>
>
> Currently each streaming task initializes the entire topology regardless of 
> the assigned topic-partitions. This is wasteful, especially when the topology 
> has local state stores: all local state stores are restored from their 
> changelog topics even when they are not actually used in the task execution. 
> To fix this, task initialization should be aware of the relevant subgraph of 
> the topology and initialize only the processors and state stores in that 
> subgraph.





[GitHub] kafka pull request: KAFKA-2737: Added single- and multi-consumer i...

2015-11-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/413




[jira] [Resolved] (KAFKA-2737) Integration tests for round-robin assignment

2015-11-04 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-2737.
--
   Resolution: Fixed
Fix Version/s: 0.9.0.0

Issue resolved by pull request 413
[https://github.com/apache/kafka/pull/413]

> Integration tests for round-robin assignment
> 
>
> Key: KAFKA-2737
> URL: https://issues.apache.org/jira/browse/KAFKA-2737
> Project: Kafka
>  Issue Type: Test
>Reporter: Anna Povzner
>Assignee: Anna Povzner
> Fix For: 0.9.0.0
>
>
> We currently don't have integration tests which use round-robin assignment. 
> This card is to add basic integration tests with round-robin assignment for 
> both single-consumer and multi-consumer cases.
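The assignment strategy those tests exercise can be sketched in a few lines (an illustrative model, not the actual RoundRobinAssignor implementation):

```python
def round_robin_assign(consumers, partitions):
    """Distribute topic-partitions over consumers in round-robin order."""
    assignment = {c: [] for c in consumers}
    # Sort partitions so the assignment is deterministic across rebalances.
    for i, partition in enumerate(sorted(partitions)):
        assignment[consumers[i % len(consumers)]].append(partition)
    return assignment

# Two consumers, one topic with four partitions: each gets two.
parts = [("t", p) for p in range(4)]
assert round_robin_assign(["c0", "c1"], parts) == {
    "c0": [("t", 0), ("t", 2)],
    "c1": [("t", 1), ("t", 3)],
}
```

An integration test would assert exactly this kind of even spread for both the single-consumer and multi-consumer cases.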





[jira] [Commented] (KAFKA-2737) Integration tests for round-robin assignment

2015-11-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14990055#comment-14990055
 ] 

ASF GitHub Bot commented on KAFKA-2737:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/413


> Integration tests for round-robin assignment
> 
>
> Key: KAFKA-2737
> URL: https://issues.apache.org/jira/browse/KAFKA-2737
> Project: Kafka
>  Issue Type: Test
>Reporter: Anna Povzner
>Assignee: Anna Povzner
> Fix For: 0.9.0.0
>
>
> We currently don't have integration tests which use round-robin assignment. 
> This card is to add basic integration tests with round-robin assignment for 
> both single-consumer and multi-consumer cases.





Re: One more Kafka Meetup hosted by LinkedIn in 2015 (this time in San Francisco) - does anyone want to talk?

2015-11-04 Thread Ed Yakabosky
Thanks Joe.  +1 on curating the page.

In the meantime, you will also find some old videos and slides in the
comments of past meet ups: http://www.meetup.com/http-kafka-apache-org/.

On Wed, Nov 4, 2015 at 4:40 AM, Joe Stein  wrote:

> They should all be on the user groups section of the confluence page
>
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+papers+and+presentations
> for which there were video. It might need some curating but that is where
> it has been going so far.
>
> ~ Joe Stein
>
> On Tue, Nov 3, 2015 at 4:48 PM, Grant Henke  wrote:
>
> > Is there a place where we can find all previously streamed/recorded
> > meetups?
> >
> > Thank you,
> > Grant
> >
> > On Tue, Nov 3, 2015 at 2:07 PM, Ed Yakabosky 
> > wrote:
> >
> > > I'm sorry to hear that Lukas.  I have heard that people are starting to
> > do
> > > carpools via rydeful.com for some of these meetups.
> > >
> > > Additionally, we will live stream and record the presentations, so you
> > can
> > > participate remotely.
> > >
> > > Ed
> > >
> > > On Tue, Nov 3, 2015 at 10:43 AM, Lukas Steiblys 
> > > wrote:
> > >
> > > > This is sad news. I was looking forward to finally going to a Kafka
> or
> > > > Samza meetup. Going to Mountain View for a meetup is just unrealistic
> > > with
> > > > 2h travel time each way.
> > > >
> > > > Lukas
> > > >
> > > > -Original Message- From: Ed Yakabosky
> > > > Sent: Tuesday, November 3, 2015 10:36 AM
> > > > To: us...@kafka.apache.org ; dev@kafka.apache.org ; Clark Haskins
> > > > Subject: Re: One more Kafka Meetup hosted by LinkedIn in 2015 (this
> > time
> > > > in San Francisco) - does anyone want to talk?
> > > >
> > > > Hi all,
> > > >
> > > > Two corrections to the invite:
> > > >
> > > >   1. The invitation is for November 18, 2015.  *NOT 2016.*  I was a
> > > little
> > > >   hasty...
> > > >   2. LinkedIn has finished remodeling our broadcast room, so we are
> > going
> > > >
> > > >   to host the meet up in Mountain View, not San Francisco.
> > > >
> > > > We've arranged for speakers from HortonWorks to talk about Security
> and
> > > > LinkedIn to talk about Quotas.  We are still looking for one more
> > > speaker,
> > > > so please let me know if you are interested.
> > > >
> > > > Thanks!
> > > > Ed
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > On Fri, Oct 30, 2015 at 12:49 PM, Ed Yakabosky <
> > eyakabo...@linkedin.com>
> > > > wrote:
> > > >
> > > > Hi all,
> > > >>
> > > >> LinkedIn is hoping to host one more Apache Kafka meetup this year on
> > > >> November 18 in our San Francisco office.  We're working on building
> > the
> > > >> agenda now.  Does anyone want to talk?  Please send me (and Clark) a
> > > >> private email with a short description of what you would be talking
> > > about
> > > >> if interested.
> > > >>
> > > >> --
> > > >> Thanks,
> > > >>
> > > >> Ed Yakabosky
> > > >> Technical Program Management @ LinkedIn
> > > >>
> > > >>
> > > >
> > > > --
> > > > Thanks,
> > > > Ed Yakabosky
> > > >
> > >
> > >
> > >
> > > --
> > > Thanks,
> > > Ed Yakabosky
> > >
> >
> >
> >
> > --
> > Grant Henke
> > Software Engineer | Cloudera
> > gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke
> >
>



-- 
Thanks,
Ed Yakabosky


[GitHub] kafka pull request: KAFKA-2730: use thread-id as metrics tags

2015-11-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/416


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-2730) partition-reassignment tool stops working due to error in registerMetric

2015-11-04 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-2730.
-
Resolution: Fixed

Issue resolved by pull request 416
[https://github.com/apache/kafka/pull/416]

> partition-reassignment tool stops working due to error in registerMetric
> 
>
> Key: KAFKA-2730
> URL: https://issues.apache.org/jira/browse/KAFKA-2730
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Jun Rao
>Assignee: Guozhang Wang
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> I updated our test system to use Kafka from latest revision 
> 7c33475274cb6e65a8e8d907e7fef6e56bc8c8e6 and now I'm seeing:
> [2015-11-03 14:07:01,554] ERROR [KafkaApi-2] error when handling request 
> Name:LeaderAndIsrRequest;Version:0;Controller:3;ControllerEpoch:1;CorrelationId:5;ClientId:3;Leaders:BrokerEndPoint(3,192.168.60.168,21769);PartitionState:(5c700e33-9230-4219-a3e1-42574c175d62-logs,0)
>  -> 
> (LeaderAndIsrInfo:(Leader:3,ISR:3,LeaderEpoch:1,ControllerEpoch:1),ReplicationFactor:3),AllReplicas:2,3,1)
>  (kafka.server.KafkaApis)
> java.lang.IllegalArgumentException: A metric named 'MetricName 
> [name=connection-close-rate, group=replica-fetcher-metrics, 
> description=Connections closed per second in the window., 
> tags={broker-id=3}]' already exists, can't register another one.
> at org.apache.kafka.common.metrics.Metrics.registerMetric(Metrics.java:285)
> at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:177)
> at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:162)
> at 
> org.apache.kafka.common.network.Selector$SelectorMetrics.(Selector.java:578)
> at org.apache.kafka.common.network.Selector.(Selector.java:112)
> at kafka.server.ReplicaFetcherThread.(ReplicaFetcherThread.scala:69)
> at 
> kafka.server.ReplicaFetcherManager.createFetcherThread(ReplicaFetcherManager.scala:35)
> at 
> kafka.server.AbstractFetcherManager$$anonfun$addFetcherForPartitions$2.apply(AbstractFetcherManager.scala:83)
> at 
> kafka.server.AbstractFetcherManager$$anonfun$addFetcherForPartitions$2.apply(AbstractFetcherManager.scala:78)
> at 
> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
> at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
> at 
> kafka.server.AbstractFetcherManager.addFetcherForPartitions(AbstractFetcherManager.scala:78)
> at kafka.server.ReplicaManager.makeFollowers(ReplicaManager.scala:791)
> at 
> kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:628)
> at kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:114)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:71)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> at java.lang.Thread.run(Thread.java:745)
> This happens when I'm running kafka-reassign-partitions.sh. As a result in 
> the verify command one of the partition reassignments says "is still in 
> progress" forever.
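The failure mode behind the fix ("use thread-id as metrics tags") can be shown with a toy registry that, like Kafka's `Metrics.registerMetric`, rejects duplicate metric names. This is a hypothetical sketch, not Kafka's actual metrics code; the `fetcher-id` tag name is illustrative:

```python
class MetricRegistry:
    """Toy registry: a metric is identified by (name, group, tags)."""
    def __init__(self):
        self._metrics = {}

    def register(self, name, group, tags):
        key = (name, group, tuple(sorted(tags.items())))
        if key in self._metrics:
            # Mirrors the IllegalArgumentException in the stack trace above.
            raise ValueError(f"metric {key} already exists")
        self._metrics[key] = 0.0

reg = MetricRegistry()
# With only a broker-id tag, the second replica fetcher thread collides:
reg.register("connection-close-rate", "replica-fetcher-metrics", {"broker-id": "3"})
try:
    reg.register("connection-close-rate", "replica-fetcher-metrics", {"broker-id": "3"})
    raise AssertionError("expected a collision")
except ValueError:
    pass
# The fix: tag each fetcher thread's metrics with its own id, so every
# thread registers a distinct metric name.
for thread_id in range(4):  # e.g. num.replica.fetchers = 4
    reg.register("connection-close-rate", "replica-fetcher-metrics",
                 {"broker-id": "3", "fetcher-id": str(thread_id)})
```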





[jira] [Commented] (KAFKA-2730) partition-reassignment tool stops working due to error in registerMetric

2015-11-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14990086#comment-14990086
 ] 

ASF GitHub Bot commented on KAFKA-2730:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/416


> partition-reassignment tool stops working due to error in registerMetric
> 
>
> Key: KAFKA-2730
> URL: https://issues.apache.org/jira/browse/KAFKA-2730
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Jun Rao
>Assignee: Guozhang Wang
>Priority: Blocker
> Fix For: 0.9.0.0
>
>





Re: Authorization Engine For Kafka Related to KPI-11

2015-11-04 Thread Don Bosco Durai
Bhavesh

I am from the Apache Ranger team, so let me answer the questions pertaining to 
Ranger…

> 1) Is there any performance impact with Brokers/Producer/Consumer while using 
> Apache Ranger ?
As such, Ranger's overhead is negligible. Specifically for Kafka, we did some 
additional optimization for the unique usage pattern of Kafka consumers and 
producers. Since Kafka itself is optimized for very high throughput, we expect 
some overhead penalty, but we don’t have exact numbers yet. Would you be willing 
to help us run some real-world scenarios to determine the overhead? If so, we 
can discuss it on the Ranger mailing list. Thanks.

> 2) Is Audit log really useful out-of-box ? or let me know what sort of 
> reports you run on audit logs (e.g pumping Apache Ranger audit log into any 
> other system for reporting purpose).
If you are referring to Ranger audit logs, then it is configurable. Out of the 
box, there are options to send them to HDFS, Solr, and/or a database, and we have 
extensions to send them to Kafka or any Log4j appender. Audit logs serve several 
purposes, from compliance requirements to operational monitoring to anomaly 
detection. The Ranger portal has its own UI for querying them, and since the 
audits are stored in a normalized format, you can also write your own reports on 
top of them.

Bosco





On 11/3/15, 9:55 PM, "Bhavesh Mistry"  wrote:

>+ Kafka Dev team to see if Kafka Dev team know or recommend any Auth
>engine for Producers/Consumers.
>
>Thanks,
>
>Bhavesh
>
>Please pardon me,  I accidentally send previous blank email.
>
>On Tue, Nov 3, 2015 at 9:52 PM, Bhavesh Mistry
> wrote:
>> On Sun, Nov 1, 2015 at 11:15 PM, Bhavesh Mistry
>>  wrote:
>>> HI All,
>>>
>>> Have any one used Apache Ranger as Authorization Engine for Kafka Topic
>>> creation, consumption (read) and  write operation on a topic.  I am looking
>>> at having audit log and regulating consumption/ write to particular topic
>>> (for example, having production environment access does not mean that anyone
>>> can run console consumer etc on particular topic. Basically, regulate who
>>> can read/write to a topic as first use case).
>>>
>>> https://cwiki.apache.org/confluence/display/RANGER/Apache+Ranger+0.5+-+User+Guide#ApacheRanger0.5-UserGuide-KAFKA
>>>
>>> If you have used Apache Ranger in production, I have following question:
>>> 1) Is there any performance impact with Brokers/Producer/Consumer while
>>> using Apache Ranger ?
>>> 2) Is Audit log really useful out-of-box ? or let me know what sort of
>>> reports you run on audit logs (e.g pumping Apache Ranger audit log into any
>>> other system for reporting purpose).
>>>
>>> Please share your experience using Kafka with any other Authorization engine
>>> if you are not using Apache Ranger (This is based on
>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-11+-+Authorization+Interface).
>>>
>>> Thanks and looking forward to hear back from Apache Kafka Community members.
>>>
>>> Thanks,
>>>
>>> Bhavesh



Build failed in Jenkins: kafka-trunk-jdk8 #96

2015-11-04 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2740: Convert Windows bin scripts from CRLF to LF line 
encodings

[wangguoz] KAFKA-2727: Topology partial construction

[wangguoz] KAFKA-2737: Added single- and multi-consumer integration tests for

[cshapi] KAFKA-2730: use thread-id as metrics tags

--
[...truncated 391 lines...]
[ant:scaladoc] ^
[ant:scaladoc] 
:294:
 warning: a pure expression does nothing in statement position; you may be 
omitting necessary parentheses
[ant:scaladoc] ControllerStats.leaderElectionTimer
[ant:scaladoc] ^
[ant:scaladoc] warning: there were 15 feature warning(s); re-run with -feature 
for details
[ant:scaladoc] 
:72:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#offer".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:32:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#offer".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:137:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#poll".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:120:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#poll".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:97:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#put".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:152:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#take".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 9 warnings found
:kafka-trunk-jdk8:core:scaladocJar
:kafka-trunk-jdk8:core:docsJar
:docsJar_2_11_7
Building project 'core' with Scala version 2.11.7
:kafka-trunk-jdk8:clients:compileJava UP-TO-DATE
:kafka-trunk-jdk8:clients:processResources UP-TO-DATE
:kafka-trunk-jdk8:clients:classes UP-TO-DATE
:kafka-trunk-jdk8:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk8:clients:createVersionFile
:kafka-trunk-jdk8:clients:jar UP-TO-DATE
:kafka-trunk-jdk8:clients:javadoc UP-TO-DATE
:kafka-trunk-jdk8:core:compileJava UP-TO-DATE
:kafka-trunk-jdk8:core:compileScala
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; 
support was removed in 8.0

:78:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^
:37:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 expireTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP) {

  ^
:392:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
  if (value.expireTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

^


[jira] [Updated] (KAFKA-2736) ZkClient doesn't handle SaslAuthenticated

2015-11-04 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2736:

Fix Version/s: 0.9.0.0

> ZkClient doesn't handle SaslAuthenticated
> -
>
> Key: KAFKA-2736
> URL: https://issues.apache.org/jira/browse/KAFKA-2736
> Project: Kafka
>  Issue Type: Bug
>  Components: zkclient
>Affects Versions: 0.9.0.0
>Reporter: Flavio Junqueira
>Assignee: Flavio Junqueira
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> See https://github.com/sgroschupf/zkclient/issues/38





[jira] [Resolved] (KAFKA-2739) Bug in ZKClient may cause failure to start brokers

2015-11-04 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-2739.
-
Resolution: Duplicate

> Bug in ZKClient may cause failure to start brokers
> --
>
> Key: KAFKA-2739
> URL: https://issues.apache.org/jira/browse/KAFKA-2739
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>
> Described by [~fpj] here:
> https://github.com/sgroschupf/zkclient/issues/38
> This is a ZkClient issue. I'm opening this JIRA so we can track the error 
> and upgrade to the new ZkClient version once it is resolved.





[jira] [Updated] (KAFKA-2736) ZkClient doesn't handle SaslAuthenticated

2015-11-04 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2736:

Priority: Blocker  (was: Major)

> ZkClient doesn't handle SaslAuthenticated
> -
>
> Key: KAFKA-2736
> URL: https://issues.apache.org/jira/browse/KAFKA-2736
> Project: Kafka
>  Issue Type: Bug
>  Components: zkclient
>Affects Versions: 0.9.0.0
>Reporter: Flavio Junqueira
>Assignee: Flavio Junqueira
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> See https://github.com/sgroschupf/zkclient/issues/38





[jira] [Updated] (KAFKA-2736) ZkClient doesn't handle SaslAuthenticated

2015-11-04 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2736:

Assignee: Flavio Junqueira

> ZkClient doesn't handle SaslAuthenticated
> -
>
> Key: KAFKA-2736
> URL: https://issues.apache.org/jira/browse/KAFKA-2736
> Project: Kafka
>  Issue Type: Bug
>  Components: zkclient
>Affects Versions: 0.9.0.0
>Reporter: Flavio Junqueira
>Assignee: Flavio Junqueira
> Fix For: 0.9.0.0
>
>
> See https://github.com/sgroschupf/zkclient/issues/38





[jira] [Commented] (KAFKA-2691) Improve handling of authorization failure during metadata refresh

2015-11-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14990190#comment-14990190
 ] 

ASF GitHub Bot commented on KAFKA-2691:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/394


> Improve handling of authorization failure during metadata refresh
> -
>
> Key: KAFKA-2691
> URL: https://issues.apache.org/jira/browse/KAFKA-2691
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> There are two problems, one more severe than the other:
> 1. The consumer blocks indefinitely if there is a non-transient authorization 
> failure during metadata refresh, due to KAFKA-2391.
> 2. We get a TimeoutException instead of an AuthorizationException in the 
> producer for the same case.
> If the fix for KAFKA-2391 is to add a timeout, then we will have issue `2` in 
> both producer and consumer.
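The desired behavior — fail fast on a non-transient authorization failure instead of retrying until a timeout — can be sketched with a hypothetical client loop (illustrative only; not the actual Kafka consumer/producer code, and `fetch_metadata` is an assumed callback):

```python
import time

class AuthorizationError(Exception):
    pass

def await_metadata(fetch_metadata, timeout_s=5.0, retry_s=0.1):
    """Poll for metadata with a deadline instead of blocking forever.

    fetch_metadata() returns (ok, err); a non-transient authorization
    failure is raised immediately rather than being retried until the
    deadline and then reported as a generic timeout.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        ok, err = fetch_metadata()
        if ok:
            return True
        if isinstance(err, AuthorizationError):
            raise err  # surface the real cause, not a TimeoutError
        time.sleep(retry_s)  # transient failure: retry until the deadline
    raise TimeoutError("metadata not available within %.1fs" % timeout_s)

# An unauthorized topic raises AuthorizationError rather than timing out.
try:
    await_metadata(lambda: (False, AuthorizationError("topic denied")),
                   timeout_s=0.5)
except AuthorizationError:
    pass
```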





[GitHub] kafka pull request: KAFKA-2691: Improve handling of authorization ...

2015-11-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/394




[jira] [Resolved] (KAFKA-2691) Improve handling of authorization failure during metadata refresh

2015-11-04 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-2691.
-
Resolution: Fixed

Issue resolved by pull request 394
[https://github.com/apache/kafka/pull/394]

> Improve handling of authorization failure during metadata refresh
> -
>
> Key: KAFKA-2691
> URL: https://issues.apache.org/jira/browse/KAFKA-2691
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> There are two problems, one more severe than the other:
> 1. The consumer blocks indefinitely if there is a non-transient authorization 
> failure during metadata refresh, due to KAFKA-2391.
> 2. We get a TimeoutException instead of an AuthorizationException in the 
> producer for the same case.
> If the fix for KAFKA-2391 is to add a timeout, then we will have issue `2` in 
> both producer and consumer.





Build failed in Jenkins: kafka-trunk-jdk7 #757

2015-11-04 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2727: Topology partial construction

[wangguoz] KAFKA-2737: Added single- and multi-consumer integration tests for

[cshapi] KAFKA-2730: use thread-id as metrics tags

[cshapi] KAFKA-2691: Improve handling of authorization failure during metadata

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-5 (docker Ubuntu ubuntu5 ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision c39e79bb5af5b4e56bec358f8ec3758e6822dbcf 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f c39e79bb5af5b4e56bec358f8ec3758e6822dbcf
 > git rev-list 7ded64bc2edf8645d7fd113633a56791bf1e8e4a # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson8661115942565517480.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 16.525 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson356459870365238869.sh
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll docsJarAll 
testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:clean UP-TO-DATE
:clients:clean
:contrib:clean UP-TO-DATE
:copycat:clean UP-TO-DATE
:core:clean
:examples:clean
:log4j-appender:clean
:streams:clean
:tools:clean
:contrib:hadoop-consumer:clean
:contrib:hadoop-producer:clean
:copycat:api:clean
:copycat:file:clean
:copycat:json:clean
:copycat:runtime:clean
:jar_core_2_10_5
Building project 'core' with Scala version 2.10.5
:kafka-trunk-jdk7:clients:compileJava
:jar_core_2_10_5 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 18.353 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51


[jira] [Created] (KAFKA-2741) Source/SinkTaskContext should be interfaces and implementations kept in runtime jar

2015-11-04 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-2741:


 Summary: Source/SinkTaskContext should be interfaces and 
implementations kept in runtime jar
 Key: KAFKA-2741
 URL: https://issues.apache.org/jira/browse/KAFKA-2741
 Project: Kafka
  Issue Type: Bug
  Components: copycat
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava
 Fix For: 0.9.0.0


Over time, these got turned into abstract classes, but they should just be 
interfaces with the specific implementation kept entirely in the runtime jar 
and opaque to connectors.
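As a rough illustration of that split (the type and method names below are invented for this sketch, not the real Copycat API), the api jar would carry only an interface while the runtime jar keeps the concrete class:

```java
import java.util.Collections;
import java.util.Map;

// api jar: connectors compile against this contract only (hypothetical shape).
interface TaskContextSketch {
    Map<String, String> configs();
}

// runtime jar: the concrete implementation stays opaque to connectors,
// so its internals can change without breaking the public API.
class WorkerTaskContextSketch implements TaskContextSketch {
    private final Map<String, String> configs;

    WorkerTaskContextSketch(Map<String, String> configs) {
        this.configs = Collections.unmodifiableMap(configs);
    }

    @Override
    public Map<String, String> configs() {
        return configs;
    }
}
```

With an interface, connectors cannot subclass runtime internals, which is the opacity the ticket asks for.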





[jira] [Created] (KAFKA-2742) SourceTaskOffsetCommitter does not properly remove commit tasks when they are already in progress

2015-11-04 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-2742:


 Summary: SourceTaskOffsetCommitter does not properly remove commit 
tasks when they are already in progress
 Key: KAFKA-2742
 URL: https://issues.apache.org/jira/browse/KAFKA-2742
 Project: Kafka
  Issue Type: Bug
  Components: copycat
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava
 Fix For: 0.9.0.0


The current implementation is relying on ScheduledExecutorService to cancel the 
task, but this doesn't handle in-progress tasks and can result in stopping 
source tasks not completing a final offset commit before considering the task 
fully stopped. This can allow rebalancing to proceed before offsets are fully 
committed.
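A minimal sketch of one way to close that window (all names here are hypothetical, not the actual patch): cancel the scheduled future, then acquire the same lock the commit runs under, so removal blocks until any in-flight commit has finished:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

class OffsetCommitterSketch {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private final Object commitLock = new Object();
    private ScheduledFuture<?> future;
    volatile int commits = 0;

    void schedule(Runnable commit, long periodMs) {
        future = scheduler.scheduleWithFixedDelay(() -> {
            // Every commit runs under commitLock, so removal can wait for it.
            synchronized (commitLock) {
                commit.run();
                commits++;
            }
        }, 0, periodMs, TimeUnit.MILLISECONDS);
    }

    void remove() {
        future.cancel(false);          // no new commits will start
        synchronized (commitLock) {    // blocks until an in-flight commit exits
            // reaching here means the final commit (if any) has completed
        }
        scheduler.shutdown();
    }
}
```

The key point is that `ScheduledFuture.cancel()` alone does not wait for a running task, which is exactly the gap described above.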





[jira] [Created] (KAFKA-2743) Forwarding task reconfigurations in Copycat can deadlock with rebalances and has no backoff

2015-11-04 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-2743:


 Summary: Forwarding task reconfigurations in Copycat can deadlock 
with rebalances and has no backoff
 Key: KAFKA-2743
 URL: https://issues.apache.org/jira/browse/KAFKA-2743
 Project: Kafka
  Issue Type: Bug
  Components: copycat
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava
 Fix For: 0.9.0.0


There are two issues with the way we're currently forwarding task 
reconfigurations. First, the forwarding is performed synchronously in the 
DistributedHerder's main processing loop. If node A forwards a task 
reconfiguration and node B has started a rebalance process, we can end up with 
distributed deadlock because node A will be blocking on the HTTP request in the 
thread that would otherwise handle heartbeating and rebalancing.

Second, currently we just retry aggressively with no backoff. In some cases the 
node that is currently thought to be the leader will legitimately be down (it 
shut down and the node sending the request hasn't rebalanced yet), so we need 
some backoff to avoid unnecessarily hammering the network and the huge log 
files that result from constant errors.
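The direction described above could be sketched roughly like this (class and method names are made up for illustration, not the actual patch): run the forwarded request on its own executor so the herder thread stays free for heartbeating and rebalancing, and retry with exponential backoff:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class ReconfigForwarderSketch {
    private final ExecutorService forwardExecutor = Executors.newSingleThreadExecutor();

    // Runs the forwarded request off the main (herder) thread so heartbeating
    // and rebalancing are never blocked, retrying with exponential backoff
    // instead of hammering a leader that may legitimately be down.
    Future<Boolean> forward(Callable<Boolean> request, int maxRetries, long initialBackoffMs) {
        return forwardExecutor.submit(() -> {
            long backoffMs = initialBackoffMs;
            for (int attempt = 0; attempt <= maxRetries; attempt++) {
                try {
                    if (request.call()) {
                        return true;
                    }
                } catch (Exception e) {
                    // leader unreachable; fall through and back off
                }
                Thread.sleep(backoffMs);
                backoffMs *= 2;
            }
            return false;
        });
    }

    void shutdown() {
        forwardExecutor.shutdown();
    }
}
```

Making the forward asynchronous removes the distributed deadlock, and the doubling backoff bounds the retry traffic and log volume while a new leader is elected.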





[jira] [Created] (KAFKA-2744) WorkerSourceTask commits offsets too early when stopping

2015-11-04 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-2744:


 Summary: WorkerSourceTask commits offsets too early when stopping
 Key: KAFKA-2744
 URL: https://issues.apache.org/jira/browse/KAFKA-2744
 Project: Kafka
  Issue Type: Bug
  Components: copycat
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava
 Fix For: 0.9.0.0


The call to commit offsets appears ok at first glance because we've invoked the 
SourceTask's stop method, but it runs before we've stopped the work thread 
which could still be invoking the SourceTask's poll() method and may have 
outstanding data. We need to wait until we're sure the work thread has 
completely finished so we're guaranteed to flush all the data generated by the 
SourceTask and commit the final offsets.
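In sketch form (hypothetical names, not the actual WorkerSourceTask code), the ordering constraint is: set the stop flag, join the work thread, and only then snapshot and commit:

```java
import java.util.ArrayList;
import java.util.List;

class WorkerSourceTaskSketch {
    final List<String> produced = new ArrayList<>();
    List<String> committed;
    private Thread workThread;
    private volatile boolean stopping = false;

    void start() {
        workThread = new Thread(() -> {
            // Stand-in for the poll() loop that keeps generating records.
            while (!stopping) {
                synchronized (produced) {
                    produced.add("record-" + produced.size());
                }
            }
        });
        workThread.start();
    }

    // The ordering that matters: flag the stop, JOIN the work thread, and only
    // then take the final snapshot to commit. Committing before join() could
    // miss records an in-flight poll() is still producing.
    void stop() throws InterruptedException {
        stopping = true;
        workThread.join();
        synchronized (produced) {
            committed = new ArrayList<>(produced);
        }
    }
}
```

Because the commit happens strictly after `join()` returns, the final offsets are guaranteed to cover everything the work thread produced.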





[jira] [Commented] (KAFKA-2741) Source/SinkTaskContext should be interfaces and implementations kept in runtime jar

2015-11-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14990235#comment-14990235
 ] 

ASF GitHub Bot commented on KAFKA-2741:
---

GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/420

KAFKA-2741: Make SourceTaskContext and SinkTaskContext interfaces and keep 
implementations in runtime jar.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka task-context-interfaces

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/420.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #420


commit bf024989a242f5132b7418a8c163ae4e922d06bd
Author: Ewen Cheslack-Postava 
Date:   2015-11-04T18:53:02Z

KAFKA-2741: Make SourceTaskContext and SinkTaskContext interfaces and keep 
implementations in runtime jar.




> Source/SinkTaskContext should be interfaces and implementations kept in 
> runtime jar
> ---
>
> Key: KAFKA-2741
> URL: https://issues.apache.org/jira/browse/KAFKA-2741
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> Over time, these got turned into abstract classes, but they should just be 
> interfaces with the specific implementation kept entirely in the runtime jar 
> and opaque to connectors.





[GitHub] kafka pull request: KAFKA-2742: Fix SourceTaskOffsetCommitter to h...

2015-11-04 Thread ewencp
GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/421

KAFKA-2742: Fix SourceTaskOffsetCommitter to handle removal of commit tasks 
when they are already in progress.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka 
wait-on-in-progress-source-offset-commits

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/421.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #421


commit 023b3be05978ee714955fdfc04809b7e286aaca1
Author: Ewen Cheslack-Postava 
Date:   2015-11-03T05:26:39Z

KAFKA-2742: Fix SourceTaskOffsetCommitter to handle removal of commit tasks 
when they are already in progress.






[GitHub] kafka pull request: KAFKA-2741: Make SourceTaskContext and SinkTas...

2015-11-04 Thread ewencp
GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/420

KAFKA-2741: Make SourceTaskContext and SinkTaskContext interfaces and keep 
implementations in runtime jar.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka task-context-interfaces

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/420.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #420


commit bf024989a242f5132b7418a8c163ae4e922d06bd
Author: Ewen Cheslack-Postava 
Date:   2015-11-04T18:53:02Z

KAFKA-2741: Make SourceTaskContext and SinkTaskContext interfaces and keep 
implementations in runtime jar.






[GitHub] kafka pull request: KAFKA-2743: Make forwarded task reconfiguratio...

2015-11-04 Thread ewencp
GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/422

KAFKA-2743: Make forwarded task reconfiguration requests asynchronous, run 
on a separate thread, and backoff before retrying when they fail.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka 
task-reconfiguration-async-with-backoff

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/422.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #422


commit 8a30a78b9222ed8fec5143a41db5cf8e6e9efbc7
Author: Ewen Cheslack-Postava 
Date:   2015-11-03T05:30:32Z

KAFKA-2743: Make forwarded task reconfiguration requests asynchronous, run 
on a separate thread, and backoff before retrying when they fail.






[GitHub] kafka pull request: KAFKA-2744: Commit source task offsets after t...

2015-11-04 Thread ewencp
GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/423

KAFKA-2744: Commit source task offsets after task is completely stopped to 
ensure no additional messages are processed during the offset commit when 
stopping tasks for rebalancing.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka 
commit-source-offsets-after-work-thread-exits

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/423.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #423


commit 5c44efa57215153d8d8b3ca6ad50a5f878602f79
Author: Ewen Cheslack-Postava 
Date:   2015-11-04T05:42:23Z

KAFKA-2744: Commit source task offsets after task is completely stopped to 
ensure no additional messages are processed during the offset commit when 
stopping tasks for rebalancing.






[jira] [Commented] (KAFKA-2743) Forwarding task reconfigurations in Copycat can deadlock with rebalances and has no backoff

2015-11-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14990238#comment-14990238
 ] 

ASF GitHub Bot commented on KAFKA-2743:
---

GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/422

KAFKA-2743: Make forwarded task reconfiguration requests asynchronous, run 
on a separate thread, and backoff before retrying when they fail.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka 
task-reconfiguration-async-with-backoff

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/422.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #422


commit 8a30a78b9222ed8fec5143a41db5cf8e6e9efbc7
Author: Ewen Cheslack-Postava 
Date:   2015-11-03T05:30:32Z

KAFKA-2743: Make forwarded task reconfiguration requests asynchronous, run 
on a separate thread, and backoff before retrying when they fail.




> Forwarding task reconfigurations in Copycat can deadlock with rebalances and 
> has no backoff
> ---
>
> Key: KAFKA-2743
> URL: https://issues.apache.org/jira/browse/KAFKA-2743
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> There are two issues with the way we're currently forwarding task 
> reconfigurations. First, the forwarding is performed synchronously in the 
> DistributedHerder's main processing loop. If node A forwards a task 
> reconfiguration and node B has started a rebalance process, we can end up 
> with distributed deadlock because node A will be blocking on the HTTP request 
> in the thread that would otherwise handle heartbeating and rebalancing.
> Second, currently we just retry aggressively with no backoff. In some cases 
> the node that is currently thought to be the leader will legitimately be down 
> (it shut down and the node sending the request hasn't rebalanced yet), so we 
> need some backoff to avoid unnecessarily hammering the network and the huge 
> log files that result from constant errors.





[jira] [Commented] (KAFKA-2744) WorkerSourceTask commits offsets too early when stopping

2015-11-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14990239#comment-14990239
 ] 

ASF GitHub Bot commented on KAFKA-2744:
---

GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/423

KAFKA-2744: Commit source task offsets after task is completely stopped to 
ensure no additional messages are processed during the offset commit when 
stopping tasks for rebalancing.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka 
commit-source-offsets-after-work-thread-exits

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/423.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #423


commit 5c44efa57215153d8d8b3ca6ad50a5f878602f79
Author: Ewen Cheslack-Postava 
Date:   2015-11-04T05:42:23Z

KAFKA-2744: Commit source task offsets after task is completely stopped to 
ensure no additional messages are processed during the offset commit when 
stopping tasks for rebalancing.




> WorkerSourceTask commits offsets too early when stopping
> 
>
> Key: KAFKA-2744
> URL: https://issues.apache.org/jira/browse/KAFKA-2744
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> The call to commit offsets appears ok at first glance because we've invoked 
> the SourceTask's stop method, but it runs before we've stopped the work 
> thread which could still be invoking the SourceTask's poll() method and may 
> have outstanding data. We need to wait until we're sure the work thread has 
> completely finished so we're guaranteed to flush all the data generated by 
> the SourceTask and commit the final offsets.





[jira] [Commented] (KAFKA-2742) SourceTaskOffsetCommitter does not properly remove commit tasks when they are already in progress

2015-11-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14990236#comment-14990236
 ] 

ASF GitHub Bot commented on KAFKA-2742:
---

GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/421

KAFKA-2742: Fix SourceTaskOffsetCommitter to handle removal of commit tasks 
when they are already in progress.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka 
wait-on-in-progress-source-offset-commits

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/421.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #421


commit 023b3be05978ee714955fdfc04809b7e286aaca1
Author: Ewen Cheslack-Postava 
Date:   2015-11-03T05:26:39Z

KAFKA-2742: Fix SourceTaskOffsetCommitter to handle removal of commit tasks 
when they are already in progress.




> SourceTaskOffsetCommitter does not properly remove commit tasks when they are 
> already in progress
> -
>
> Key: KAFKA-2742
> URL: https://issues.apache.org/jira/browse/KAFKA-2742
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> The current implementation is relying on ScheduledExecutorService to cancel 
> the task, but this doesn't handle in-progress tasks and can result in 
> stopping source tasks not completing a final offset commit before considering 
> the task fully stopped. This can allow rebalancing to proceed before offsets 
> are fully committed.





[GitHub] kafka pull request: KAFKA-2744: Commit source task offsets after t...

2015-11-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/423




[jira] [Commented] (KAFKA-2744) WorkerSourceTask commits offsets too early when stopping

2015-11-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14990286#comment-14990286
 ] 

ASF GitHub Bot commented on KAFKA-2744:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/423


> WorkerSourceTask commits offsets too early when stopping
> 
>
> Key: KAFKA-2744
> URL: https://issues.apache.org/jira/browse/KAFKA-2744
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> The call to commit offsets appears ok at first glance because we've invoked 
> the SourceTask's stop method, but it runs before we've stopped the work 
> thread which could still be invoking the SourceTask's poll() method and may 
> have outstanding data. We need to wait until we're sure the work thread has 
> completely finished so we're guaranteed to flush all the data generated by 
> the SourceTask and commit the final offsets.





[jira] [Resolved] (KAFKA-2744) WorkerSourceTask commits offsets too early when stopping

2015-11-04 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-2744.
--
Resolution: Fixed

Issue resolved by pull request 423
[https://github.com/apache/kafka/pull/423]

> WorkerSourceTask commits offsets too early when stopping
> 
>
> Key: KAFKA-2744
> URL: https://issues.apache.org/jira/browse/KAFKA-2744
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> The call to commit offsets appears ok at first glance because we've invoked 
> the SourceTask's stop method, but it runs before we've stopped the work 
> thread which could still be invoking the SourceTask's poll() method and may 
> have outstanding data. We need to wait until we're sure the work thread has 
> completely finished so we're guaranteed to flush all the data generated by 
> the SourceTask and commit the final offsets.





[jira] [Created] (KAFKA-2745) Update JavaDoc for the new / updated APIs

2015-11-04 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-2745:


 Summary: Update JavaDoc for the new / updated APIs
 Key: KAFKA-2745
 URL: https://issues.apache.org/jira/browse/KAFKA-2745
 Project: Kafka
  Issue Type: Sub-task
Reporter: Guozhang Wang
Assignee: Guozhang Wang
 Fix For: 0.9.0.0








[jira] [Created] (KAFKA-2746) Add support for using ConsumerGroupCommand on secure install

2015-11-04 Thread Ashish K Singh (JIRA)
Ashish K Singh created KAFKA-2746:
-

 Summary: Add support for using ConsumerGroupCommand on secure 
install
 Key: KAFKA-2746
 URL: https://issues.apache.org/jira/browse/KAFKA-2746
 Project: Kafka
  Issue Type: Task
  Components: tools
Affects Versions: 0.9.0.0
Reporter: Ashish K Singh
Assignee: Ashish K Singh


KAFKA-2490 adds support for new-consumer to ConsumerGroupCommand. This JIRA 
intends to make ConsumerGroupCommand work for secure installations.





[jira] [Commented] (KAFKA-2746) Add support for using ConsumerGroupCommand on secure install

2015-11-04 Thread Ashish K Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14990363#comment-14990363
 ] 

Ashish K Singh commented on KAFKA-2746:
---

[~hachikuji], [~guozhang], based on discussion with [~gwenshap], I am 
separating out the work to make ConsumerGroupCommand work on secure installs 
into a separate JIRA. You guys can go ahead with reviewing KAFKA-2490.

> Add support for using ConsumerGroupCommand on secure install
> 
>
> Key: KAFKA-2746
> URL: https://issues.apache.org/jira/browse/KAFKA-2746
> Project: Kafka
>  Issue Type: Task
>  Components: tools
>Affects Versions: 0.9.0.0
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> KAFKA-2490 adds support for new-consumer to ConsumerGroupCommand. This JIRA 
> intends to make ConsumerGroupCommand work for secure installations.





Build failed in Jenkins: kafka-trunk-jdk8 #97

2015-11-04 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-2691: Improve handling of authorization failure during metadata

[wangguoz] KAFKA-2744: Commit source task offsets after task is completely 
stopped

--
Started by an SCM change
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-6 (docker Ubuntu ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 70a784b64ab61bcd517619fed44419d59d467b27 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 70a784b64ab61bcd517619fed44419d59d467b27
 > git rev-list c30ee50d82131ead8bc64223ae5970555b0c78cf # timeout=10
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson5124695743576565307.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.1/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 15.825 secs
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson4801982855835509181.sh
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll docsJarAll 
testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:clean UP-TO-DATE
:clients:clean
:contrib:clean UP-TO-DATE
:copycat:clean UP-TO-DATE
:core:clean
:examples:clean
:log4j-appender:clean
:streams:clean
:tools:clean
:contrib:hadoop-consumer:clean
:contrib:hadoop-producer:clean
:copycat:api:clean
:copycat:file:clean
:copycat:json:clean
:copycat:runtime:clean
:jar_core_2_10_5
Building project 'core' with Scala version 2.10.5
:kafka-trunk-jdk8:clients:compileJava
:jar_core_2_10_5 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 14.567 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45


Build failed in Jenkins: kafka-trunk-jdk7 #758

2015-11-04 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2744: Commit source task offsets after task is completely 
stopped

--
[...truncated 124 lines...]
:kafka-trunk-jdk7:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk7:clients:createVersionFile
:kafka-trunk-jdk7:clients:jar UP-TO-DATE
:kafka-trunk-jdk7:core:compileJava UP-TO-DATE
:kafka-trunk-jdk7:core:compileScala
:78:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^
:37:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 expireTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP) {

  ^
:392:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
  if (value.expireTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

^
:273:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
if (offsetAndMetadata.commitTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

  ^
:293:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.uncleanLeaderElectionRate
^
:294:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.leaderElectionTimer
^
:115:
 value METADATA_FETCH_TIMEOUT_CONFIG in object ProducerConfig is deprecated: 
see corresponding Javadoc for more information.
props.put(ProducerConfig.METADATA_FETCH_TIMEOUT_CONFIG, 
config.metadataFetchTimeoutMs.toString)
 ^
:117:
 value TIMEOUT_CONFIG in object ProducerConfig is deprecated: see corresponding 
Javadoc for more information.
props.put(ProducerConfig.TIMEOUT_CONFIG, config.requestTimeoutMs.toString)
 ^
:121:
 value BLOCK_ON_BUFFER_FULL_CONFIG in object ProducerConfig is deprecated: see 
corresponding Javadoc for more information.
  props.put(ProducerConfig.BLOCK_ON_BUFFER_FULL_CONFIG, "false")
   ^
:75:
 value BLOCK_ON_BUFFER_FULL_CONFIG in object ProducerConfig is deprecated: see 
corresponding Javadoc for more information.
producerProps.put(ProducerConfig.BLOCK_ON_BUFFER_FULL_CONFIG, "true")
 ^
:194:
 value BLOCK_ON_BUFFER_FULL_CONFIG in object ProducerConfig is deprecated: see 
corresponding Javadoc for more information.
  maybeSetDefaultProperty(producerProps, 
ProducerConfig.BLOCK_ON_BUFFER_FULL_CONFIG, "true")
 

[GitHub] kafka pull request: HOTFIX: Fix incorrect version used for group m...

2015-11-04 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/424

HOTFIX: Fix incorrect version used for group metadata version



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka hotfix-metadata-storage

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/424.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #424


commit 3b76e938b76079eae68b70b1607159ff62e77832
Author: Jason Gustafson 
Date:   2015-11-04T22:29:53Z

HOTFIX: Fix incorrect version used for group metadata version




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: HOTFIX: Fix incorrect version used for group m...

2015-11-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/424




[GitHub] kafka pull request: KAFKA-2697: client-side support for leave grou...

2015-11-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/414




[jira] [Resolved] (KAFKA-2697) add leave group logic to the consumer

2015-11-04 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-2697.
--
Resolution: Fixed

Issue resolved by pull request 414
[https://github.com/apache/kafka/pull/414]

> add leave group logic to the consumer
> -
>
> Key: KAFKA-2697
> URL: https://issues.apache.org/jira/browse/KAFKA-2697
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Onur Karaman
>Assignee: Jason Gustafson
> Fix For: 0.9.0.0
>
>
> KAFKA-2397 added logic on the coordinator to handle LeaveGroupRequests. We 
> need to add logic to KafkaConsumer to send out a LeaveGroupRequest on close.
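The intent above can be sketched with a toy coordinator/consumer pair. This is purely illustrative: `Coordinator`, `Consumer`, and `leave_group` are hypothetical names, not Kafka's actual classes or wire protocol.

```python
# Toy sketch of "leave group on close" (KAFKA-2697). All names here are
# illustrative, not Kafka's real API.

class Coordinator:
    def __init__(self):
        self.members = set()

    def join_group(self, member_id):
        self.members.add(member_id)

    def leave_group(self, member_id):
        # An explicit LeaveGroup lets the coordinator rebalance the group
        # immediately instead of waiting for the member's session timeout.
        self.members.discard(member_id)


class Consumer:
    def __init__(self, member_id, coordinator):
        self.member_id = member_id
        self.coordinator = coordinator
        coordinator.join_group(member_id)

    def close(self):
        # The change tracked here: send a LeaveGroup as part of close().
        self.coordinator.leave_group(self.member_id)


coord = Coordinator()
c1 = Consumer("consumer-1", coord)
c2 = Consumer("consumer-2", coord)
c1.close()
print(sorted(coord.members))  # prints ['consumer-2']
```

The payoff is latency: without the explicit leave, the remaining members cannot pick up the departed member's partitions until the session timeout expires.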



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2697) add leave group logic to the consumer

2015-11-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14990643#comment-14990643
 ] 

ASF GitHub Bot commented on KAFKA-2697:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/414


> add leave group logic to the consumer
> -
>
> Key: KAFKA-2697
> URL: https://issues.apache.org/jira/browse/KAFKA-2697
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Onur Karaman
>Assignee: Jason Gustafson
> Fix For: 0.9.0.0
>
>
> KAFKA-2397 added logic on the coordinator to handle LeaveGroupRequests. We 
> need to add logic to KafkaConsumer to send out a LeaveGroupRequest on close.





Build failed in Jenkins: kafka-trunk-jdk8 #98

2015-11-04 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] HOTFIX: Fix incorrect version used for group metadata version

[wangguoz] KAFKA-2697: client-side support for leave group

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (Ubuntu ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision ef5d168cc8f10ad4f0efe9df4cbe849a4b35496e 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f ef5d168cc8f10ad4f0efe9df4cbe849a4b35496e
 > git rev-list 70a784b64ab61bcd517619fed44419d59d467b27 # timeout=10
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson8440213686875189815.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.1/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 13.034 secs
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson24532860784950184.sh
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll docsJarAll 
testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:contrib:clean UP-TO-DATE
:copycat:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:contrib:hadoop-consumer:clean UP-TO-DATE
:contrib:hadoop-producer:clean UP-TO-DATE
:copycat:api:clean UP-TO-DATE
:copycat:file:clean UP-TO-DATE
:copycat:json:clean UP-TO-DATE
:copycat:runtime:clean UP-TO-DATE
:jar_core_2_10_5
Building project 'core' with Scala version 2.10.5
:kafka-trunk-jdk8:clients:compileJava
:jar_core_2_10_5 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 11.185 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45


[GitHub] kafka pull request: KAFKA-2745: Update JavaDoc for new / updated c...

2015-11-04 Thread guozhangwang
GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/425

KAFKA-2745: Update JavaDoc for new / updated consumer APIs



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka K2745

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/425.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #425


commit 6a9b18127ebd26cab5ab8b2a66bc6a296e982c50
Author: Guozhang Wang 
Date:   2015-11-04T23:06:38Z

header docs

commit 297842087979a893de5447055d5fe89a00bae1d4
Author: Guozhang Wang 
Date:   2015-11-04T23:06:49Z

Merge branch 'trunk' of https://github.com/apache/kafka into K2745

commit d4855042eb29eb90ab05246ec4f06f4c3d4325e3
Author: Guozhang Wang 
Date:   2015-11-04T23:27:07Z

Function java doc






[jira] [Commented] (KAFKA-2745) Update JavaDoc for the new / updated APIs

2015-11-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14990700#comment-14990700
 ] 

ASF GitHub Bot commented on KAFKA-2745:
---

GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/425

KAFKA-2745: Update JavaDoc for new / updated consumer APIs



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka K2745

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/425.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #425


commit 6a9b18127ebd26cab5ab8b2a66bc6a296e982c50
Author: Guozhang Wang 
Date:   2015-11-04T23:06:38Z

header docs

commit 297842087979a893de5447055d5fe89a00bae1d4
Author: Guozhang Wang 
Date:   2015-11-04T23:06:49Z

Merge branch 'trunk' of https://github.com/apache/kafka into K2745

commit d4855042eb29eb90ab05246ec4f06f4c3d4325e3
Author: Guozhang Wang 
Date:   2015-11-04T23:27:07Z

Function java doc




> Update JavaDoc for the new / updated APIs
> -
>
> Key: KAFKA-2745
> URL: https://issues.apache.org/jira/browse/KAFKA-2745
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
> Fix For: 0.9.0.0
>
>






[GitHub] kafka pull request: MINOR: add test case for fetching from a compa...

2015-11-04 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/426

MINOR: add test case for fetching from a compacted topic



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka compacted-topics

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/426.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #426


commit de24860790d59910701b924cb63e81f8ff3c9fe2
Author: Jason Gustafson 
Date:   2015-11-04T18:59:18Z

MINOR: add test case for fetching from a compacted topic






Build failed in Jenkins: kafka-trunk-jdk7 #759

2015-11-04 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] HOTFIX: Fix incorrect version used for group metadata version

[wangguoz] KAFKA-2697: client-side support for leave group

--
[...truncated 3178 lines...]
at 
kafka.integration.SaslSslTopicMetadataTest.kafka$api$SaslTestHarness$$super$tearDown(SaslSslTopicMetadataTest.scala:25)
at kafka.api.SaslTestHarness$class.tearDown(SaslTestHarness.scala:73)
at 
kafka.integration.SaslSslTopicMetadataTest.tearDown(SaslSslTopicMetadataTest.scala:25)

kafka.integration.SaslSslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SaslSslTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SaslSslTopicMetadataTest > testTopicMetadataRequest FAILED
org.apache.kafka.common.KafkaException: File 
/tmp/kafka4620736078055821718.tmp cannot be read.
at 
org.apache.kafka.common.security.JaasUtils.isZkSecurityEnabled(JaasUtils.java:95)
at 
kafka.zk.ZooKeeperTestHarness$class.setUp(ZooKeeperTestHarness.scala:67)
at 
kafka.integration.BaseTopicMetadataTest.setUp(BaseTopicMetadataTest.scala:48)
at 
kafka.integration.SaslSslTopicMetadataTest.kafka$api$SaslTestHarness$$super$setUp(SaslSslTopicMetadataTest.scala:25)
at kafka.api.SaslTestHarness$class.setUp(SaslTestHarness.scala:38)
at 
kafka.integration.SaslSslTopicMetadataTest.setUp(SaslSslTopicMetadataTest.scala:25)

java.lang.NullPointerException
at 
kafka.integration.BaseTopicMetadataTest.tearDown(BaseTopicMetadataTest.scala:63)
at 
kafka.integration.SaslSslTopicMetadataTest.kafka$api$SaslTestHarness$$super$tearDown(SaslSslTopicMetadataTest.scala:25)
at kafka.api.SaslTestHarness$class.tearDown(SaslTestHarness.scala:73)
at 
kafka.integration.SaslSslTopicMetadataTest.tearDown(SaslSslTopicMetadataTest.scala:25)

kafka.integration.SaslSslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.SslTopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
PASSED

kafka.integration.SslTopicMetadataTest > testAutoCreateTopicWithCollision PASSED

kafka.integration.SslTopicMetadataTest > testAliveBrokerListWithNoTopics PASSED

kafka.integration.SslTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SslTopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.SslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SslTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SslTopicMetadataTest > testTopicMetadataRequest PASSED

kafka.integration.SslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithCollision PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokerListWithNoTopics PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testGetAllTopicMetadata 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testTopicMetadataRequest 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.utils.UtilsTest > testAbs PASSED

kafka.utils.UtilsTest > testReplaceSuffix PASSED

kafka.utils.UtilsTest > testDoublyLinkedList PASSED

kafka.utils.UtilsTest > testCircularIterator PASSED

kafka.utils.UtilsTest > testReadBytes PASSED

kafka.utils.UtilsTest > testCsvList PASSED

kafka.utils.UtilsTest > testReadInt PASSED

kafka.utils.UtilsTest > testCsvMap PASSED

kafka.utils.UtilsTest > testInLock PASSED

kafka.utils.UtilsTest > testSwallow PASSED

kafka.utils.SchedulerTest > testMockSchedulerNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testMockSchedulerPeriodicTask PASSED

kafka.utils.SchedulerTest > testNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testRestart PASSED

kafka.utils.SchedulerTest > testReentrantTaskInMockScheduler PASSED

kafka.utils.SchedulerTest > testPeriodicTask PASSED

kafka.utils.ByteBoundedBlockingQueueTest > testByteBoundedBlockingQueue PASSED

kafka.utils.CommandLineUtilsTest > testParseEmptyArg

[GitHub] kafka pull request: MINOR: add test case for fetching from a compa...

2015-11-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/426




Build failed in Jenkins: kafka-trunk-jdk7 #760

2015-11-04 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: add test case for fetching from a compacted topic

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (Ubuntu ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 68f42210a1c8ce64846ffdc2cdbecc6fa5b87739 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 68f42210a1c8ce64846ffdc2cdbecc6fa5b87739
 > git rev-list ef5d168cc8f10ad4f0efe9df4cbe849a4b35496e # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson3022832669781397908.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 12.148 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson6853820461572021559.sh
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll docsJarAll 
testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:contrib:clean UP-TO-DATE
:copycat:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:contrib:hadoop-consumer:clean UP-TO-DATE
:contrib:hadoop-producer:clean UP-TO-DATE
:copycat:api:clean UP-TO-DATE
:copycat:file:clean UP-TO-DATE
:copycat:json:clean UP-TO-DATE
:copycat:runtime:clean UP-TO-DATE
:jar_core_2_10_5
Building project 'core' with Scala version 2.10.5
:kafka-trunk-jdk7:clients:compileJava
:jar_core_2_10_5 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 13.349 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51


[GitHub] kafka pull request: KAFKA-2258[WIP]: add failover to mirrormaker t...

2015-11-04 Thread granders
GitHub user granders opened a pull request:

https://github.com/apache/kafka/pull/427

KAFKA-2258[WIP]: add failover to mirrormaker test

This PR adds failover to the simple end-to-end mirror maker test.

Marked as WIP for two reasons:
- We may want to add a couple more test cases where Kafka is being used to 
store offsets
- There appears to be a test failure in the hard failover case

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/confluentinc/kafka KAFKA-2258-mirrormaker-test

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/427.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #427


commit b5db56912720e98ab3f191370a6dadca4efed1f9
Author: Geoff Anderson 
Date:   2015-10-28T23:59:51Z

Added sketch of mirror maker failover

commit 1c9bc561bf5047360974cfb121f039c63a2bf1a8
Author: Geoff Anderson 
Date:   2015-11-05T01:41:46Z

Cleaned up logging

commit 62cfe5db519975f74e4ccf5ff5695b36322ae6cb
Author: Geoff Anderson 
Date:   2015-11-05T01:44:02Z

Removed extra spaces from producer.properties






[jira] [Commented] (KAFKA-2258) Port mirrormaker_testsuite

2015-11-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14990928#comment-14990928
 ] 

ASF GitHub Bot commented on KAFKA-2258:
---

GitHub user granders opened a pull request:

https://github.com/apache/kafka/pull/427

KAFKA-2258[WIP]: add failover to mirrormaker test

This PR adds failover to the simple end-to-end mirror maker test.

Marked as WIP for two reasons:
- We may want to add a couple more test cases where Kafka is being used to 
store offsets
- There appears to be a test failure in the hard failover case

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/confluentinc/kafka KAFKA-2258-mirrormaker-test

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/427.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #427


commit b5db56912720e98ab3f191370a6dadca4efed1f9
Author: Geoff Anderson 
Date:   2015-10-28T23:59:51Z

Added sketch of mirror maker failover

commit 1c9bc561bf5047360974cfb121f039c63a2bf1a8
Author: Geoff Anderson 
Date:   2015-11-05T01:41:46Z

Cleaned up logging

commit 62cfe5db519975f74e4ccf5ff5695b36322ae6cb
Author: Geoff Anderson 
Date:   2015-11-05T01:44:02Z

Removed extra spaces from producer.properties




> Port mirrormaker_testsuite
> --
>
> Key: KAFKA-2258
> URL: https://issues.apache.org/jira/browse/KAFKA-2258
> Project: Kafka
>  Issue Type: Sub-task
>  Components: system tests
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
> Fix For: 0.9.0.0
>
>
> Port mirrormaker_testsuite to run on ducktape





[jira] [Created] (KAFKA-2747) Message loss if mirror maker is killed with hard kill and then restarted

2015-11-04 Thread Geoff Anderson (JIRA)
Geoff Anderson created KAFKA-2747:
-

 Summary: Message loss if mirror maker is killed with hard kill and 
then restarted
 Key: KAFKA-2747
 URL: https://issues.apache.org/jira/browse/KAFKA-2747
 Project: Kafka
  Issue Type: Bug
Reporter: Geoff Anderson


I recently added simple failover to the existing mirror maker test 
(https://github.com/apache/kafka/pull/427) and found that killing the mirror 
maker process with a hard kill resulted in message loss.

The test here has two single-node broker clusters, one producer producing to 
the source cluster, one consumer consuming from the target cluster, and a 
single mirror maker instance mirroring data between the two clusters.

Mirror maker is using the old consumer, with ZooKeeper for offset storage.
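One plausible mechanism for this kind of loss (illustrative only; the real mirror maker pipeline is more involved) is the ordering of offset commits relative to the mirrored produce: if the source offset is committed before the message actually reaches the target cluster, a hard kill in between drops that message, because the restart resumes from the committed offset.

```python
# Illustrative sketch, not Kafka code: why committing the consumed offset
# before the mirrored produce completes loses messages on a hard kill.

def mirror(source, commit_before_produce, kill_at):
    target, committed = [], 0
    for offset, msg in enumerate(source):
        if commit_before_produce:
            committed = offset + 1       # offset committed first...
            if offset == kill_at:
                break                    # ...hard kill before the produce
            target.append(msg)
        else:
            if offset == kill_at:
                break                    # hard kill before processing
            target.append(msg)           # produce to target first...
            committed = offset + 1       # ...then commit the offset
    # On restart, mirroring resumes from the last committed offset.
    target.extend(source[committed:])
    return target


src = ["m0", "m1", "m2", "m3"]
lossy = mirror(src, commit_before_produce=True, kill_at=1)   # m1 is lost
safe = mirror(src, commit_before_produce=False, kill_at=1)   # nothing lost
```

The commit-after-produce ordering gives at-least-once delivery: a kill between the produce and the commit re-mirrors a message (a duplicate) rather than losing one.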





[jira] [Commented] (KAFKA-2530) metrics for old replica fetcher thread need to be deregistered

2015-11-04 Thread Otis Gospodnetic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14990953#comment-14990953
 ] 

Otis Gospodnetic commented on KAFKA-2530:
-

[~junrao] would you consider getting this into 0.9.x if we can provide a patch?

> metrics for old replica fetcher thread need to be deregistered
> --
>
> Key: KAFKA-2530
> URL: https://issues.apache.org/jira/browse/KAFKA-2530
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.1
>Reporter: Jun Rao
>
> Currently, the lag metrics in the replica fetcher have the following format 
> where the leader broker id is included in the clientId tag.
> clientId="ReplicaFetcherThread-0-101",partition="0",topic="test",mbean_property_type="FetcherLagMetrics",Value="262"
> There are a couple of issues. (1) When the replica changes from a follower to 
> a leader, we will need to set the lag to 0 or deregister the metric. (2) 
> Similarly, when the follower switches to another leader, we should deregister 
> the metric or clear the value. Also, we probably should remove the leader 
> broker id from the clientId tag. That way, the metric name doesn't change 
> when the follower switches leaders.
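Both suggestions above can be sketched with a minimal metric registry. This is a hypothetical `MetricRegistry`, not Kafka's `Metrics` class: it shows deregistering the lag metric on a role change, and keeping the leader broker id out of the tags so the metric name stays stable across leader switches.

```python
# Illustrative registry sketch for the two fixes suggested in KAFKA-2530.
# MetricRegistry and its methods are hypothetical names, not Kafka's API.

class MetricRegistry:
    def __init__(self):
        self.metrics = {}

    def register(self, name, tags):
        key = (name, tuple(sorted(tags.items())))
        if key in self.metrics:
            # Same failure mode as KAFKA-2730: registering an identical
            # metric name raises instead of silently replacing it.
            raise ValueError("metric already exists: %r" % (key,))
        self.metrics[key] = 0
        return key

    def deregister(self, key):
        self.metrics.pop(key, None)


registry = MetricRegistry()

# Stable name: the leader broker id is kept out of the tags, so a leader
# switch does not mint a new metric name (or collide with a stale one).
key = registry.register("FetcherLag", {"topic": "test", "partition": "0"})


def on_become_leader(registry, key):
    # Fix (1): deregister (or zero out) the lag metric on follower -> leader.
    registry.deregister(key)


on_become_leader(registry, key)
```

With deregistration in place, a later re-registration after the broker becomes a follower again succeeds instead of hitting the "already exists" error.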





[jira] [Updated] (KAFKA-2258) Port mirrormaker_testsuite

2015-11-04 Thread Geoff Anderson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoff Anderson updated KAFKA-2258:
--
Reviewer: Ewen Cheslack-Postava

> Port mirrormaker_testsuite
> --
>
> Key: KAFKA-2258
> URL: https://issues.apache.org/jira/browse/KAFKA-2258
> Project: Kafka
>  Issue Type: Sub-task
>  Components: system tests
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
> Fix For: 0.9.0.0
>
>
> Port mirrormaker_testsuite to run on ducktape





[jira] [Updated] (KAFKA-2258) Port mirrormaker_testsuite

2015-11-04 Thread Geoff Anderson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoff Anderson updated KAFKA-2258:
--
Status: Patch Available  (was: Open)

> Port mirrormaker_testsuite
> --
>
> Key: KAFKA-2258
> URL: https://issues.apache.org/jira/browse/KAFKA-2258
> Project: Kafka
>  Issue Type: Sub-task
>  Components: system tests
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
> Fix For: 0.9.0.0
>
>
> Port mirrormaker_testsuite to run on ducktape





Build failed in Jenkins: kafka-trunk-jdk8 #99

2015-11-04 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: add test case for fetching from a compacted topic

--
[...truncated 389 lines...]
[ant:scaladoc] ^
[ant:scaladoc] 
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/server/KafkaServer.scala:294:
 warning: a pure expression does nothing in statement position; you may be 
omitting necessary parentheses
[ant:scaladoc] ControllerStats.leaderElectionTimer
[ant:scaladoc] ^
[ant:scaladoc] warning: there were 15 feature warning(s); re-run with -feature 
for details
[ant:scaladoc] 
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/utils/ByteBoundedBlockingQueue.scala:72:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#offer".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/utils/ByteBoundedBlockingQueue.scala:32:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#offer".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/utils/ByteBoundedBlockingQueue.scala:137:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#poll".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/utils/ByteBoundedBlockingQueue.scala:120:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#poll".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/utils/ByteBoundedBlockingQueue.scala:97:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#put".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/utils/ByteBoundedBlockingQueue.scala:152:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#take".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 9 warnings found
:kafka-trunk-jdk8:core:scaladocJar
:kafka-trunk-jdk8:core:docsJar
:docsJar_2_11_7
Building project 'core' with Scala version 2.11.7
:kafka-trunk-jdk8:clients:compileJava UP-TO-DATE
:kafka-trunk-jdk8:clients:processResources UP-TO-DATE
:kafka-trunk-jdk8:clients:classes UP-TO-DATE
:kafka-trunk-jdk8:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk8:clients:createVersionFile
:kafka-trunk-jdk8:clients:jar UP-TO-DATE
:kafka-trunk-jdk8:clients:javadoc UP-TO-DATE
:kafka-trunk-jdk8:core:compileJava UP-TO-DATE
:kafka-trunk-jdk8:core:compileScala
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0

/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/api/OffsetCommitRequest.scala:78:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/common/OffsetMetadataAndError.scala:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/common/OffsetMetadataAndError.scala:37:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 expireTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP) {

  ^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/coordinator/GroupMetadataManager.scala:392:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
  if (value.expireTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/server/KafkaApis.scala:273:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

[jira] [Created] (KAFKA-2748) SinkTasks do not handle rebalances and offset commit properly

2015-11-04 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-2748:


 Summary: SinkTasks do not handle rebalances and offset commit 
properly
 Key: KAFKA-2748
 URL: https://issues.apache.org/jira/browse/KAFKA-2748
 Project: Kafka
  Issue Type: Bug
  Components: copycat
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava
 Fix For: 0.9.0.0


Since the initial SinkTask code was originally written with an early version of 
the new consumer, it wasn't set up to handle rebalances properly. Since we 
recently added the rebalance listener, we can use it to correctly commit 
offsets. However, the existing code also has two issues. First, in the case of 
a failure to flush data in the sink task, we are not correctly rewinding to the 
last committed offsets. We need to do this since we cannot be sure what 
happened to the outstanding data, so we need to reprocess it. 

Second, flushing when stopping was not being handled properly. The existing 
code assumed that the flush would happen as part of SinkTask.stop(). However, 
this did not make sense since SinkTask.stop() was being invoked before the 
worker thread was stopped, so we could end up committing the wrong offsets. 
Instead, we need 
to wait for the worker thread to finish whatever it is currently doing, do one 
final flush + commit offsets, and only then invoke stop() to allow the task to 
do final cleanup. This is a bit confusing because stop means different things 
for source and sink tasks since they have pull vs push semantics.
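The rewind-on-failed-flush behavior described above can be sketched roughly as 
follows (hypothetical names, not the actual copycat classes; the real task 
tracks offsets per TopicPartition through the consumer):

```java
import java.util.HashMap;
import java.util.Map;

// Rough sketch of the commit/rewind flow described above. These names are
// hypothetical and do not come from the actual copycat code.
class SinkCommitSketch {
    final Map<String, Long> committed = new HashMap<>(); // last successfully committed offsets
    final Map<String, Long> current = new HashMap<>();   // offsets consumed but not yet flushed
    boolean flushSucceeds = true;                        // stand-in for SinkTask.flush()

    // On a successful flush, commit the current offsets; on failure, rewind
    // the consume position to the last committed offsets so the outstanding
    // data gets reprocessed.
    Map<String, Long> commitOrRewind() {
        if (flushSucceeds) {
            committed.putAll(current);
        } else {
            current.clear();
            current.putAll(committed); // seek back: we can't know what happened to in-flight data
        }
        return new HashMap<>(current);
    }
}
```

The same commit step, run once after the worker thread has finished, gives the 
final flush + commit before stop() that the description calls for.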



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2749) Failure of end to end latency test during nightly run

2015-11-04 Thread Geoff Anderson (JIRA)
Geoff Anderson created KAFKA-2749:
-

 Summary: Failure of end to end latency test during nightly run
 Key: KAFKA-2749
 URL: https://issues.apache.org/jira/browse/KAFKA-2749
 Project: Kafka
  Issue Type: Bug
Reporter: Geoff Anderson


With SSL enabled, end to end latency timed out during the following nightly run:

http://testing.confluent.io/kafka/2015-11-04--001/Benchmark/test_end_to_end_latency/security_protocol=SSL/





[jira] [Commented] (KAFKA-2749) Failure of end to end latency test during nightly run

2015-11-04 Thread Geoff Anderson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14990993#comment-14990993
 ] 

Geoff Anderson commented on KAFKA-2749:
---

[~rsivaram] Would you be interested in looking into this?

> Failure of end to end latency test during nightly run
> -
>
> Key: KAFKA-2749
> URL: https://issues.apache.org/jira/browse/KAFKA-2749
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>
> With SSL enabled, end to end latency timed out during the following nightly 
> run:
> http://testing.confluent.io/kafka/2015-11-04--001/Benchmark/test_end_to_end_latency/security_protocol=SSL/





[jira] [Commented] (KAFKA-2747) Message loss if mirror maker is killed with hard kill and then restarted

2015-11-04 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14991026#comment-14991026
 ] 

Jiangjie Qin commented on KAFKA-2747:
-

[~geoffra], could you verify the following two things?
1. Is the consumer group id the same after mirror maker restarts?
2. Is the committed offset out of range after mirror maker restarts?

Both could be caused by the fact that we are setting auto.offset.reset to 
largest.
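
For context, the relevant old-consumer settings look roughly like this 
(illustrative values, not taken from the actual test config):

```properties
# mirror maker source-cluster consumer (old consumer, zookeeper offset storage)
# group.id must be identical across restarts to resume from committed offsets
group.id=mirror-maker-group
# With "largest", a new group id or an out-of-range committed offset makes the
# consumer jump to the log end, so messages produced in the meantime are lost.
# "smallest" would trade that loss for possible duplicates.
auto.offset.reset=largest
```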

> Message loss if mirror maker is killed with hard kill and then restarted
> 
>
> Key: KAFKA-2747
> URL: https://issues.apache.org/jira/browse/KAFKA-2747
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>
> I recently added simple failover to the existing mirror maker test 
> (https://github.com/apache/kafka/pull/427) and found that killing mirror 
> maker process with a hard kill resulted in message loss.
> The test here has two single-node broker clusters, one producer producing to 
> the source cluster, one consumer consuming from the target cluster, and a 
> single mirror maker instance mirroring data between the two clusters.
> mirror maker is using old consumer, zookeeper for offset storage





[jira] [Created] (KAFKA-2750) Sender.java: handleProduceResponse does not check protocol version

2015-11-04 Thread Geoff Anderson (JIRA)
Geoff Anderson created KAFKA-2750:
-

 Summary: Sender.java: handleProduceResponse does not check 
protocol version
 Key: KAFKA-2750
 URL: https://issues.apache.org/jira/browse/KAFKA-2750
 Project: Kafka
  Issue Type: Bug
Reporter: Geoff Anderson


If you try to run a 0.9 producer against an 0.8.2.2 Kafka broker, you get a 
fairly cryptic error message:

[2015-11-04 18:55:43,583] ERROR Uncaught error in kafka producer I/O thread:  
(org.apache.kafka.clients.producer.internals.Sender)
org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
'throttle_time_ms': java.nio.BufferUnderflowException
at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:71)
at 
org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:462)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:279)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:216)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:141)

Although we shouldn't expect an 0.9 producer to work against an 0.8.X broker 
since the protocol version has been increased, perhaps the error could be 
clearer.

The cause seems to be that in Sender.java, handleProduceResponse does not have 
any mechanism for checking the protocol version of the received produce 
response - it just calls a constructor which blindly tries to read the throttle 
time field, which in this case fails.
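
One illustrative way to tolerate the missing field (a sketch only, with a 
simplified hypothetical layout; the real fix in the client would be to parse 
each response against the version of the request that was sent, not to sniff 
remaining bytes):

```java
import java.nio.ByteBuffer;

// Illustrative parser for a response where throttle_time_ms was added in a
// newer protocol version. The field layout here is simplified and hypothetical.
class ProduceResponseSketch {
    final long baseOffset;
    final int throttleTimeMs; // -1 when the broker predates the field

    ProduceResponseSketch(ByteBuffer buf) {
        baseOffset = buf.getLong();
        // An 0.8.x-style broker's response ends here; only read the newer
        // field if the bytes are actually present, instead of blindly
        // calling buf.getInt() and hitting BufferUnderflowException.
        throttleTimeMs = buf.remaining() >= 4 ? buf.getInt() : -1;
    }
}
```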






[GitHub] kafka pull request: KAFKA-2738: Replica FetcherThread should conne...

2015-11-04 Thread gwenshap
GitHub user gwenshap opened a pull request:

https://github.com/apache/kafka/pull/428

KAFKA-2738: Replica FetcherThread should connect to leader endpoint m…

…atching its inter-broker security protocol

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gwenshap/kafka KAFKA-2738

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/428.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #428


commit 994fa156a0f6e818184182e7e5c852ece58191ce
Author: Gwen Shapira 
Date:   2015-11-05T06:30:13Z

KAFKA-2738: Replica FetcherThread should connect to leader endpoint 
matching its inter-broker security protocol




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2738) Can't set SSL as inter-broker-protocol by rolling restart of brokers

2015-11-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14991228#comment-14991228
 ] 

ASF GitHub Bot commented on KAFKA-2738:
---

GitHub user gwenshap opened a pull request:

https://github.com/apache/kafka/pull/428

KAFKA-2738: Replica FetcherThread should connect to leader endpoint m…

…atching its inter-broker security protocol

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gwenshap/kafka KAFKA-2738

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/428.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #428


commit 994fa156a0f6e818184182e7e5c852ece58191ce
Author: Gwen Shapira 
Date:   2015-11-05T06:30:13Z

KAFKA-2738: Replica FetcherThread should connect to leader endpoint 
matching its inter-broker security protocol




> Can't set SSL as inter-broker-protocol by rolling restart of brokers
> 
>
> Key: KAFKA-2738
> URL: https://issues.apache.org/jira/browse/KAFKA-2738
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Ben Stopford
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Scenario (as carefully documented by [~benstopford]):
> 1. Start 2 or more brokers with listeners on both PLAINTEXT and SSL 
> protocols, and PLAINTEXT as security.inter.broker.protocol:
> inter.broker.protocol.version = 0.9.0.X
> security.inter.broker.protocol = PLAINTEXT
> listeners = PLAINTEXT://:9092,SSL://:9093
> 2. Stop one of the brokers and change security.inter.broker.protocol to SSL
> inter.broker.protocol.version = 0.9.0.X
> security.inter.broker.protocol = SSL
> listeners = PLAINTEXT://:9092,SSL://:9093
> 3. Start that broker again.
> You will get replication errors as it will attempt to use SSL on a PLAINTEXT 
> port:
> {code}
> WARN ReplicaFetcherThread-0-3, Error in fetch 
> kafka.server.ReplicaFetcherThread$FetchRequest@78ca3ba1. Possible cause: 
> java.io.IOException: Connection to Node(3, worker4, 9092) failed 
> (kafka.server.ReplicaFetcherThread)
> WARN Failed to send SSL Close message 
> (org.apache.kafka.common.network.SslTransportLayer)
> java.io.IOException: Broken pipe
> at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
> at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
> at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
> at sun.nio.ch.IOUtil.write(IOUtil.java:65)
> at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:492)
> at 
> org.apache.kafka.common.network.SslTransportLayer.flush(SslTransportLayer.java:188)
> at 
> org.apache.kafka.common.network.SslTransportLayer.close(SslTransportLayer.java:161)
> at org.apache.kafka.common.network.KafkaChannel.close(KafkaChannel.java:50)
> at org.apache.kafka.common.network.Selector.close(Selector.java:448)
> at org.apache.kafka.common.network.Selector.poll(Selector.java:316)
> at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:270)
> at 
> kafka.utils.NetworkClientBlockingOps$.recurse$1(NetworkClientBlockingOps.scala:128)
> at 
> kafka.utils.NetworkClientBlockingOps$.kafka$utils$NetworkClientBlockingOps$$pollUntilFound$extension(NetworkClientBlockingOps.scala:139)
> at 
> kafka.utils.NetworkClientBlockingOps$.kafka$utils$NetworkClientBlockingOps$$pollUntil$extension(NetworkClientBlockingOps.scala:105)
> at 
> kafka.utils.NetworkClientBlockingOps$.blockingReady$extension(NetworkClientBlockingOps.scala:58)
> at 
> kafka.server.ReplicaFetcherThread.sendRequest(ReplicaFetcherThread.scala:202)
> at kafka.server.ReplicaFetcherThread.fetch(ReplicaFetcherThread.scala:192)
> at kafka.server.ReplicaFetcherThread.fetch(ReplicaFetcherThread.scala:42)
> at 
> kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:102)
> at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:93)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
> {code}





[jira] [Updated] (KAFKA-2738) Can't set SSL as inter-broker-protocol by rolling restart of brokers

2015-11-04 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2738:

Reviewer: Jun Rao

> Can't set SSL as inter-broker-protocol by rolling restart of brokers
> 
>
> Key: KAFKA-2738
> URL: https://issues.apache.org/jira/browse/KAFKA-2738
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Ben Stopford
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Scenario (as carefully documented by [~benstopford]):
> 1. Start 2 or more brokers with listeners on both PLAINTEXT and SSL 
> protocols, and PLAINTEXT as security.inter.broker.protocol:
> inter.broker.protocol.version = 0.9.0.X
> security.inter.broker.protocol = PLAINTEXT
> listeners = PLAINTEXT://:9092,SSL://:9093
> 2. Stop one of the brokers and change security.inter.broker.protocol to SSL
> inter.broker.protocol.version = 0.9.0.X
> security.inter.broker.protocol = SSL
> listeners = PLAINTEXT://:9092,SSL://:9093
> 3. Start that broker again.
> You will get replication errors as it will attempt to use SSL on a PLAINTEXT 
> port:
> {code}
> WARN ReplicaFetcherThread-0-3, Error in fetch 
> kafka.server.ReplicaFetcherThread$FetchRequest@78ca3ba1. Possible cause: 
> java.io.IOException: Connection to Node(3, worker4, 9092) failed 
> (kafka.server.ReplicaFetcherThread)
> WARN Failed to send SSL Close message 
> (org.apache.kafka.common.network.SslTransportLayer)
> java.io.IOException: Broken pipe
> at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
> at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
> at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
> at sun.nio.ch.IOUtil.write(IOUtil.java:65)
> at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:492)
> at 
> org.apache.kafka.common.network.SslTransportLayer.flush(SslTransportLayer.java:188)
> at 
> org.apache.kafka.common.network.SslTransportLayer.close(SslTransportLayer.java:161)
> at org.apache.kafka.common.network.KafkaChannel.close(KafkaChannel.java:50)
> at org.apache.kafka.common.network.Selector.close(Selector.java:448)
> at org.apache.kafka.common.network.Selector.poll(Selector.java:316)
> at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:270)
> at 
> kafka.utils.NetworkClientBlockingOps$.recurse$1(NetworkClientBlockingOps.scala:128)
> at 
> kafka.utils.NetworkClientBlockingOps$.kafka$utils$NetworkClientBlockingOps$$pollUntilFound$extension(NetworkClientBlockingOps.scala:139)
> at 
> kafka.utils.NetworkClientBlockingOps$.kafka$utils$NetworkClientBlockingOps$$pollUntil$extension(NetworkClientBlockingOps.scala:105)
> at 
> kafka.utils.NetworkClientBlockingOps$.blockingReady$extension(NetworkClientBlockingOps.scala:58)
> at 
> kafka.server.ReplicaFetcherThread.sendRequest(ReplicaFetcherThread.scala:202)
> at kafka.server.ReplicaFetcherThread.fetch(ReplicaFetcherThread.scala:192)
> at kafka.server.ReplicaFetcherThread.fetch(ReplicaFetcherThread.scala:42)
> at 
> kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:102)
> at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:93)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
> {code}





[jira] [Commented] (KAFKA-2738) Can't set SSL as inter-broker-protocol by rolling restart of brokers

2015-11-04 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14991230#comment-14991230
 ] 

Gwen Shapira commented on KAFKA-2738:
-

The PR above fixes the issue we saw.

Two notes:
1. We keep passing metadata cache references around ReplicaManager. It would 
be nice to refactor a bit and initialize ReplicaManager with MetadataCache as 
a member, so we can avoid passing references around. I avoided this in this PR 
to keep it focused.

2. I kept running into timeouts with controlled shutdown while testing this. It 
doesn't seem like the same issue (i.e. they reproduced very inconsistently on 
both SSL and PLAINTEXT brokers), but worth keeping in mind.
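
The gist of the fix, sketched independently of the actual ReplicaFetcherThread 
code (the class and method names below are hypothetical): look the leader's 
endpoint up by security.inter.broker.protocol instead of assuming a single port.

```java
import java.util.Map;

// Hypothetical sketch: a broker advertises one endpoint per security protocol,
// and the replica fetcher must pick the one matching the inter-broker protocol
// rather than whichever port it happens to know about.
class EndpointPicker {
    static String pick(Map<String, String> endpointsByProtocol, String interBrokerProtocol) {
        String endpoint = endpointsByProtocol.get(interBrokerProtocol);
        if (endpoint == null)
            throw new IllegalStateException(
                "leader advertises no " + interBrokerProtocol + " endpoint");
        return endpoint;
    }
}
```

With the scenario's listeners, a fetcher configured for SSL would then pick 
port 9093 instead of trying an SSL handshake against the PLAINTEXT port 9092.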

> Can't set SSL as inter-broker-protocol by rolling restart of brokers
> 
>
> Key: KAFKA-2738
> URL: https://issues.apache.org/jira/browse/KAFKA-2738
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Ben Stopford
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Scenario (as carefully documented by [~benstopford]):
> 1. Start 2 or more brokers with listeners on both PLAINTEXT and SSL 
> protocols, and PLAINTEXT as security.inter.broker.protocol:
> inter.broker.protocol.version = 0.9.0.X
> security.inter.broker.protocol = PLAINTEXT
> listeners = PLAINTEXT://:9092,SSL://:9093
> 2. Stop one of the brokers and change security.inter.broker.protocol to SSL
> inter.broker.protocol.version = 0.9.0.X
> security.inter.broker.protocol = SSL
> listeners = PLAINTEXT://:9092,SSL://:9093
> 3. Start that broker again.
> You will get replication errors as it will attempt to use SSL on a PLAINTEXT 
> port:
> {code}
> WARN ReplicaFetcherThread-0-3, Error in fetch 
> kafka.server.ReplicaFetcherThread$FetchRequest@78ca3ba1. Possible cause: 
> java.io.IOException: Connection to Node(3, worker4, 9092) failed 
> (kafka.server.ReplicaFetcherThread)
> WARN Failed to send SSL Close message 
> (org.apache.kafka.common.network.SslTransportLayer)
> java.io.IOException: Broken pipe
> at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
> at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
> at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
> at sun.nio.ch.IOUtil.write(IOUtil.java:65)
> at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:492)
> at 
> org.apache.kafka.common.network.SslTransportLayer.flush(SslTransportLayer.java:188)
> at 
> org.apache.kafka.common.network.SslTransportLayer.close(SslTransportLayer.java:161)
> at org.apache.kafka.common.network.KafkaChannel.close(KafkaChannel.java:50)
> at org.apache.kafka.common.network.Selector.close(Selector.java:448)
> at org.apache.kafka.common.network.Selector.poll(Selector.java:316)
> at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:270)
> at 
> kafka.utils.NetworkClientBlockingOps$.recurse$1(NetworkClientBlockingOps.scala:128)
> at 
> kafka.utils.NetworkClientBlockingOps$.kafka$utils$NetworkClientBlockingOps$$pollUntilFound$extension(NetworkClientBlockingOps.scala:139)
> at 
> kafka.utils.NetworkClientBlockingOps$.kafka$utils$NetworkClientBlockingOps$$pollUntil$extension(NetworkClientBlockingOps.scala:105)
> at 
> kafka.utils.NetworkClientBlockingOps$.blockingReady$extension(NetworkClientBlockingOps.scala:58)
> at 
> kafka.server.ReplicaFetcherThread.sendRequest(ReplicaFetcherThread.scala:202)
> at kafka.server.ReplicaFetcherThread.fetch(ReplicaFetcherThread.scala:192)
> at kafka.server.ReplicaFetcherThread.fetch(ReplicaFetcherThread.scala:42)
> at 
> kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:102)
> at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:93)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
> {code}




