Re: setting up kafka github

2015-11-14 Thread jeanbaptiste lespiau
Hello,

I'm new to Kafka too, but I think this page can help you:
https://help.github.com/articles/using-pull-requests/

It describes exactly the process to follow.

Regards.

2015-11-14 11:49 GMT+01:00 mojhaha kiklasds :

> Hello,
>
> I'm new to GitHub but I want to contribute to Kafka. I am trying to
> set up my GitHub repo based on the instructions mentioned here:
>
> https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes#ContributingCodeChanges-PullRequest
>
> I have one doubt though. Which repo shall I configure as the remote -
> apache-kafka or my fork?
>
> If I configure apache-kafka as the remote, will I be able to submit pull
> requests?
>
> If I sync my committed changes to my fork (hosted on GitHub), will I issue
> pull requests from this fork to apache-kafka?
>
> Thanks,
> Mojhaha
>


setting up kafka github

2015-11-14 Thread mojhaha kiklasds
Hello,

I'm new to GitHub but I want to contribute to Kafka. I am trying to
set up my GitHub repo based on the instructions mentioned here:
https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes#ContributingCodeChanges-PullRequest

I have one doubt though. Which repo shall I configure as the remote -
apache-kafka or my fork?

If I configure apache-kafka as the remote, will I be able to submit pull
requests?

If I sync my committed changes to my fork (hosted on GitHub), will I issue
pull requests from this fork to apache-kafka?

Thanks,
Mojhaha
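(For readers finding this thread later: the usual arrangement is to clone your fork and add apache/kafka as a second remote; pull requests are then opened from a branch on your fork against apache/kafka. A sketch follows - the remote name "upstream", the branch name, and the `<your-username>` placeholder are conventions and stand-ins, not requirements.)

```shell
# Clone your fork; it becomes the "origin" remote
git clone https://github.com/<your-username>/kafka.git
cd kafka

# Add the Apache repo as a read-only "upstream" remote
git remote add upstream https://github.com/apache/kafka.git

# Start each change from the latest upstream trunk
git fetch upstream
git checkout -b my-change upstream/trunk

# Push the branch to your fork, then open the pull request on github.com
# from <your-username>/kafka:my-change against apache/kafka
git push origin my-change
```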


[jira] [Assigned] (KAFKA-2605) Replace `catch: Throwable` clauses with `NonFatal` or `NonControl`

2015-11-14 Thread jin xing (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jin xing reassigned KAFKA-2605:
---

Assignee: jin xing

> Replace `catch: Throwable` clauses with `NonFatal` or `NonControl`
> --
>
> Key: KAFKA-2605
> URL: https://issues.apache.org/jira/browse/KAFKA-2605
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.2.2
>Reporter: Ismael Juma
>Assignee: jin xing
>  Labels: newbie
>
> The Kafka codebase includes a number of instances where we do `catch t: 
> Throwable` where we should really be doing `catch NonFatal(t)` or `catch 
> NonControl(t)` where `NonFatal` is part of the standard library and 
> `NonControl` is something like:
> {code}
> object NonControl {
>   def apply(t: Throwable): Boolean = t match {
>     case _: ControlThrowable => false
>     case _ => true
>   }
>   def unapply(t: Throwable): Option[Throwable] = if (apply(t)) Some(t) else None
> }
> {code}
> We can also use `NonControl` to replace cases like (it's more concise and has 
> the same behaviour):
> {code}
>   case e: ControlThrowable => throw e
>   case e: Throwable => ...
> {code}
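(For context, a minimal before/after of the pattern the issue describes. This is a sketch: `doWork` and `logger` are placeholders, and `NonControl` is assumed to be defined as in the snippet above.)

{code}
import scala.util.control.NonFatal

try doWork() catch {
  // Previously: `case t: Throwable` would also swallow fatal errors
  // (VirtualMachineError, ThreadDeath, ...) and ControlThrowable.
  // NonFatal lets those propagate and matches everything else.
  case NonFatal(t) => logger.error("doWork failed", t)
}
{code}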



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2605: Replace 'catch: Throwable' clauses...

2015-11-14 Thread ZoneMayor
GitHub user ZoneMayor opened a pull request:

https://github.com/apache/kafka/pull/531

KAFKA-2605: Replace 'catch: Throwable' clauses with 'NonFatal'

'catch: Throwable' will catch VirtualMachineError, ThreadDeath,
InterruptedException, LinkageError, ControlThrowable, and NotImplementedError;
we don't want to catch those kinds of errors

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ZoneMayor/kafka 0.8.2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/531.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #531


commit c466ef240e509c9aed7b2e7c6729df97abe89fff
Author: jinxing 
Date:   2015-11-15T07:11:13Z

KAFKA-2605: Replace 'catch: Throwable' clauses with 'NonFatal'




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2605) Replace `catch: Throwable` clauses with `NonFatal` or `NonControl`

2015-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005772#comment-15005772
 ] 

ASF GitHub Bot commented on KAFKA-2605:
---

GitHub user ZoneMayor opened a pull request:

https://github.com/apache/kafka/pull/531

KAFKA-2605: Replace 'catch: Throwable' clauses with 'NonFatal'

'catch: Throwable' will catch VirtualMachineError, ThreadDeath,
InterruptedException, LinkageError, ControlThrowable, and NotImplementedError;
we don't want to catch those kinds of errors

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ZoneMayor/kafka 0.8.2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/531.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #531


commit c466ef240e509c9aed7b2e7c6729df97abe89fff
Author: jinxing 
Date:   2015-11-15T07:11:13Z

KAFKA-2605: Replace 'catch: Throwable' clauses with 'NonFatal'




> Replace `catch: Throwable` clauses with `NonFatal` or `NonControl`
> --
>
> Key: KAFKA-2605
> URL: https://issues.apache.org/jira/browse/KAFKA-2605
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.2.2
>Reporter: Ismael Juma
>Assignee: jin xing
>  Labels: newbie
>
> The Kafka codebase includes a number of instances where we do `catch t: 
> Throwable` where we should really be doing `catch NonFatal(t)` or `catch 
> NonControl(t)` where `NonFatal` is part of the standard library and 
> `NonControl` is something like:
> {code}
> object NonControl {
>   def apply(t: Throwable): Boolean = t match {
>     case _: ControlThrowable => false
>     case _ => true
>   }
>   def unapply(t: Throwable): Option[Throwable] = if (apply(t)) Some(t) else None
> }
> {code}
> We can also use `NonControl` to replace cases like (it's more concise and has 
> the same behaviour):
> {code}
>   case e: ControlThrowable => throw e
>   case e: Throwable => ...
> {code}





[jira] [Commented] (KAFKA-2841) Group metadata cache loading is not safe when reloading a partition

2015-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005715#comment-15005715
 ] 

ASF GitHub Bot commented on KAFKA-2841:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/530

KAFKA-2841: safe group metadata cache loading/unloading



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-2841

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/530.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #530


commit 881380eac954e0906ef2ec0fe3d5d8e067473a35
Author: Jason Gustafson 
Date:   2015-11-14T23:54:25Z

KAFKA-2841: safe group metadata cache loading/unloading




> Group metadata cache loading is not safe when reloading a partition
> ---
>
> Key: KAFKA-2841
> URL: https://issues.apache.org/jira/browse/KAFKA-2841
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> If the coordinator receives a leaderAndIsr request which includes a higher 
> leader epoch for one of the partitions that it owns, then it will reload the 
> offset/metadata for that partition again. This can happen because the leader 
> epoch is incremented for ISR changes which do not result in a new leader for 
> the partition. Currently, the coordinator blindly replaces cached metadata 
> values on reloading, which can result in odd behavior such as unexpected 
> session timeouts or request timeouts while rebalancing.
> To fix this, we need to check that the group being loaded has a higher 
> generation than the cached value before replacing it. Also, if we have to 
> replace a cached value (which shouldn't happen except when loading), we need 
> to be very careful to ensure that any active delayed operations won't affect 
> the group. 
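(A sketch of the guarded replacement the last paragraph describes. The names `groupCache`, `GroupMetadata`, and `generationId` are illustrative here, not the actual coordinator code.)

{code}
def onGroupLoaded(loaded: GroupMetadata): Unit = groupCache.synchronized {
  groupCache.get(loaded.groupId) match {
    // Keep the cached group if it is at least as new as the loaded one
    case Some(cached) if cached.generationId >= loaded.generationId => ()
    // Otherwise the loaded group is newer: replace the cached entry
    case _ => groupCache.put(loaded.groupId, loaded)
  }
}
{code}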





[GitHub] kafka pull request: KAFKA-2841: safe group metadata cache loading/...

2015-11-14 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/530

KAFKA-2841: safe group metadata cache loading/unloading



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-2841

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/530.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #530


commit 881380eac954e0906ef2ec0fe3d5d8e067473a35
Author: Jason Gustafson 
Date:   2015-11-14T23:54:25Z

KAFKA-2841: safe group metadata cache loading/unloading






[jira] [Comment Edited] (KAFKA-2605) Replace `catch: Throwable` clauses with `NonFatal` or `NonControl`

2015-11-14 Thread jin xing (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005727#comment-15005727
 ] 

jin xing edited comment on KAFKA-2605 at 11/15/15 5:00 AM:
---

Yes, `case e: Throwable` is dangerous; I will change it to use NonFatal.


was (Author: jinxing6...@126.com):
https://issues.apache.org/jira/browse/KAFKA-2605?jql=project%20%3D%20KAFKA%20AND%20status%20%3D%20Open%20AND%20labels%20%3D%20newbie%20AND%20assignee%20in%20(%22jinxing6042%40126.com%22)

> Replace `catch: Throwable` clauses with `NonFatal` or `NonControl`
> --
>
> Key: KAFKA-2605
> URL: https://issues.apache.org/jira/browse/KAFKA-2605
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.2.2
>Reporter: Ismael Juma
>Assignee: jin xing
>  Labels: newbie
>
> The Kafka codebase includes a number of instances where we do `catch t: 
> Throwable` where we should really be doing `catch NonFatal(t)` or `catch 
> NonControl(t)` where `NonFatal` is part of the standard library and 
> `NonControl` is something like:
> {code}
> object NonControl {
>   def apply(t: Throwable): Boolean = t match {
>     case _: ControlThrowable => false
>     case _ => true
>   }
>   def unapply(t: Throwable): Option[Throwable] = if (apply(t)) Some(t) else None
> }
> {code}
> We can also use `NonControl` to replace cases like (it's more concise and has 
> the same behaviour):
> {code}
>   case e: ControlThrowable => throw e
>   case e: Throwable => ...
> {code}





[jira] [Commented] (KAFKA-2605) Replace `catch: Throwable` clauses with `NonFatal` or `NonControl`

2015-11-14 Thread jin xing (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005727#comment-15005727
 ] 

jin xing commented on KAFKA-2605:
-

https://issues.apache.org/jira/browse/KAFKA-2605?jql=project%20%3D%20KAFKA%20AND%20status%20%3D%20Open%20AND%20labels%20%3D%20newbie%20AND%20assignee%20in%20(%22jinxing6042%40126.com%22)

> Replace `catch: Throwable` clauses with `NonFatal` or `NonControl`
> --
>
> Key: KAFKA-2605
> URL: https://issues.apache.org/jira/browse/KAFKA-2605
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.2.2
>Reporter: Ismael Juma
>Assignee: jin xing
>  Labels: newbie
>
> The Kafka codebase includes a number of instances where we do `catch t: 
> Throwable` where we should really be doing `catch NonFatal(t)` or `catch 
> NonControl(t)` where `NonFatal` is part of the standard library and 
> `NonControl` is something like:
> {code}
> object NonControl {
>   def apply(t: Throwable): Boolean = t match {
>     case _: ControlThrowable => false
>     case _ => true
>   }
>   def unapply(t: Throwable): Option[Throwable] = if (apply(t)) Some(t) else None
> }
> {code}
> We can also use `NonControl` to replace cases like (it's more concise and has 
> the same behaviour):
> {code}
>   case e: ControlThrowable => throw e
>   case e: Throwable => ...
> {code}





[jira] [Updated] (KAFKA-2658) Implement SASL/PLAIN

2015-11-14 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2658:
---
Fix Version/s: 0.9.1.0

> Implement SASL/PLAIN
> 
>
> Key: KAFKA-2658
> URL: https://issues.apache.org/jira/browse/KAFKA-2658
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 0.9.1.0
>
>
> KAFKA-1686 supports SASL/Kerberos using GSSAPI. We should enable more SASL 
> mechanisms. SASL/PLAIN would enable a simpler use of SASL, which along with 
> SSL provides a secure Kafka that uses username/password for client 
> authentication.
> SASL/PLAIN protocol and its uses are described in 
> [https://tools.ietf.org/html/rfc4616]. It is supported in Java.
> This should be implemented after KAFKA-1686. This task should also hopefully 
> enable simpler unit testing of the SASL code.





[jira] [Updated] (KAFKA-2257) port replication_testsuite

2015-11-14 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2257:
---
Affects Version/s: 0.9.0.0
Fix Version/s: (was: 0.9.0.0)

> port replication_testsuite
> --
>
> Key: KAFKA-2257
> URL: https://issues.apache.org/jira/browse/KAFKA-2257
> Project: Kafka
>  Issue Type: Sub-task
>  Components: system tests
>Affects Versions: 0.9.0.0
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
>
> Port subset of replication_testsuite to run on ducktape. Details to follow





[jira] [Updated] (KAFKA-2256) Port system tests

2015-11-14 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2256:
---
Affects Version/s: 0.9.0.0
Fix Version/s: (was: 0.9.0.0)

> Port system tests
> -
>
> Key: KAFKA-2256
> URL: https://issues.apache.org/jira/browse/KAFKA-2256
> Project: Kafka
>  Issue Type: Improvement
>  Components: system tests
>Affects Versions: 0.9.0.0
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
>
> This is a tracking issue for the system test suites to be ported per KIP-25





[jira] [Updated] (KAFKA-2259) port offset_management_testsuite

2015-11-14 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2259:
---
Affects Version/s: 0.9.0.0
Fix Version/s: (was: 0.9.0.0)

> port offset_management_testsuite
> 
>
> Key: KAFKA-2259
> URL: https://issues.apache.org/jira/browse/KAFKA-2259
> Project: Kafka
>  Issue Type: Sub-task
>  Components: system tests
>Affects Versions: 0.9.0.0
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
>
> Port to run on ducktape





[jira] [Updated] (KAFKA-2311) Consumer's ensureNotClosed method not thread safe

2015-11-14 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2311:
---
Fix Version/s: (was: 0.9.0.0)
   0.9.1.0

> Consumer's ensureNotClosed method not thread safe
> -
>
> Key: KAFKA-2311
> URL: https://issues.apache.org/jira/browse/KAFKA-2311
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Tim Brooks
>Assignee: Tim Brooks
> Fix For: 0.9.1.0
>
> Attachments: KAFKA-2311.patch, KAFKA-2311.patch
>
>
> When a call to the consumer is made, the first check is to see that the 
> consumer is not closed. This variable is not volatile, so there is no 
> guarantee that previous stores will be visible before a read.
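(The fix amounts to marking the flag volatile so a close performed on one thread is visible to callers on another. This is a sketch of the pattern, not the actual client code.)

{code}
class SimpleConsumer {
  // volatile: a write in close() happens-before subsequent reads elsewhere
  @volatile private var closed = false

  private def ensureNotClosed(): Unit =
    if (closed) throw new IllegalStateException("This consumer has already been closed.")

  def close(): Unit = { closed = true }
}
{code}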





[jira] [Updated] (KAFKA-2448) BrokerChangeListener missed broker id path ephemeral node deletion event.

2015-11-14 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2448:
---
Affects Version/s: 0.9.0.0
Fix Version/s: (was: 0.9.0.0)

> BrokerChangeListener missed broker id path ephemeral node deletion event.
> -
>
> Key: KAFKA-2448
> URL: https://issues.apache.org/jira/browse/KAFKA-2448
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Flavio Junqueira
>
> When a broker gets bounced, ideally the sequence should be like this:
> 1.1. Broker shutdown resources.
> 1.2. Broker close zkClient (this will cause the ephemeral node of 
> /brokers/ids/BROKER_ID to be deleted)
> 1.3. Broker restart and load the log segment
> 1.4. Broker create ephemeral node /brokers/ids/BROKER_ID
> The broker-side logs are:
> {noformat}
> ...
> 2015/08/17 22:42:37.663 INFO [SocketServer] [Thread-1] [kafka-server] [] 
> [Socket Server on Broker 1140], Shutting down
> 2015/08/17 22:42:37.735 INFO [SocketServer] [Thread-1] [kafka-server] [] 
> [Socket Server on Broker 1140], Shutdown completed
> ...
> 2015/08/17 22:42:53.898 INFO [ZooKeeper] [Thread-1] [kafka-server] [] 
> Session: 0x14d43fd905f68d7 closed
> 2015/08/17 22:42:53.898 INFO [ClientCnxn] [main-EventThread] [kafka-server] 
> [] EventThread shut down
> 2015/08/17 22:42:53.898 INFO [KafkaServer] [Thread-1] [kafka-server] [] 
> [Kafka Server 1140], shut down completed
> ...
> 2015/08/17 22:43:03.306 INFO [ClientCnxn] 
> [main-SendThread(zk-ei1-kafkatest.stg.linkedin.com:12913)] [kafka-server] [] 
> Session establishment complete on server zk-ei1-kafkatest.stg.linkedin
> .com/172.20.73.211:12913, sessionid = 0x24d43fd93d96821, negotiated timeout = 
> 12000
> 2015/08/17 22:43:03.306 INFO [ZkClient] [main-EventThread] [kafka-server] [] 
> zookeeper state changed (SyncConnected)
> ...
> {noformat}
> On the controller side, the sequence should be:
> 2.1. Controlled shutdown the broker
> 2.2. BrokerChangeListener fired for /brokers/ids child change because 
> ephemeral node is deleted in step 1.2
> 2.3. BrokerChangeListener fired again for /brokers/ids child change because 
> the ephemeral node is created in step 1.4
> The issue I saw was that, on the controller side, the broker change listener 
> only fired once, after step 1.4. So the controller did not see any broker change.
> {noformat}
> 2015/08/17 22:41:46.189 [KafkaController] [Controller 1507]: Shutting down 
> broker 1140
> ...
> 2015/08/17 22:42:38.031 [RequestSendThread] 
> [Controller-1507-to-broker-1140-send-thread], Controller 1507 epoch 799 fails 
> to send request Name: StopReplicaRequest; Version: 0; CorrelationId: 5334; 
> ClientId: ; DeletePartitions: false; ControllerId: 1507; ControllerEpoch: 
> 799; Partitions: [seas-decisionboard-searcher-service_call,1] to broker 1140 
> : (EndPoint(eat1-app1140.corp.linkedin.com,10251,PLAINTEXT)). Reconnecting to 
> broker.
> java.nio.channels.ClosedChannelException
> at kafka.network.BlockingChannel.send(BlockingChannel.scala:110)
> at 
> kafka.controller.RequestSendThread.liftedTree1$1(ControllerChannelManager.scala:132)
> at 
> kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:131)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
> 2015/08/17 22:42:38.031 [RequestSendThread] 
> [Controller-1507-to-broker-1140-send-thread], Controller 1507 connected to 
> 1140 : (EndPoint(eat1-app1140.corp.linkedin.com,10251,PLAINTEXT)) for sending 
> state change requests
> 2015/08/17 22:42:38.332 [RequestSendThread] 
> [Controller-1507-to-broker-1140-send-thread], Controller 1507 epoch 799 fails 
> to send request Name: StopReplicaRequest; Version: 0; CorrelationId: 5334; 
> ClientId: ; DeletePartitions: false; ControllerId: 1507; ControllerEpoch: 
> 799; Partitions: [seas-decisionboard-searcher-service_call,1] to broker 1140 
> : (EndPoint(eat1-app1140.corp.linkedin.com,10251,PLAINTEXT)). Reconnecting to 
> broker.
> java.nio.channels.ClosedChannelException
> at kafka.network.BlockingChannel.send(BlockingChannel.scala:110)
> at 
> kafka.controller.RequestSendThread.liftedTree1$1(ControllerChannelManager.scala:132)
> at 
> kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:131)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
> 
> 2015/08/17 22:43:09.035 [ReplicaStateMachine$BrokerChangeListener] 
> [BrokerChangeListener on Controller 1507]: Broker change listener fired for 
> path /brokers/ids with children 
> 1140,1282,1579,871,1556,872,1511,873,874,852,1575,875,1574,1530,854,857,858,859,1493,1272,880,1547,1568,1500,1521,863,864,865,867,1507
> 2015/08/17 22:43:09.082 [ReplicaStateMachine$BrokerChangeListener] 
> 

[jira] [Updated] (KAFKA-2378) Add Copycat embedded API

2015-11-14 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2378:
---
Fix Version/s: (was: 0.9.0.0)
   0.9.1.0

> Add Copycat embedded API
> 
>
> Key: KAFKA-2378
> URL: https://issues.apache.org/jira/browse/KAFKA-2378
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.1.0
>
>
> Much of the required Copycat API will exist from previous patches since any 
> main() method will need to do very similar operations. However, integrating 
> with any other Java code may require additional API support.
> For example, one of the use cases when integrating with any stream processing 
> application will require knowing which topics will be written to. We will 
> need to add APIs to expose the topics a registered connector is writing to so 
> they can be consumed by a stream processing task.





[jira] [Resolved] (KAFKA-2682) Authorization section in official docs

2015-11-14 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-2682.

Resolution: Fixed

This is done.

> Authorization section in official docs
> --
>
> Key: KAFKA-2682
> URL: https://issues.apache.org/jira/browse/KAFKA-2682
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Parth Brahmbhatt
> Fix For: 0.9.0.0
>
>
> We need to add a section in the official documentation regarding 
> authorization:
> http://kafka.apache.org/documentation.html





[jira] [Updated] (KAFKA-2672) SendFailedException when new consumer is run with SSL

2015-11-14 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2672:
---
Fix Version/s: (was: 0.9.0.0)
   0.9.1.0
  Description: 
When running new consumer with SSL, debug logs show these exceptions every time:

{quote}
[2015-10-19 20:57:43,389] DEBUG Fetch failed 
(org.apache.kafka.clients.consumer.internals.Fetcher)
org.apache.kafka.clients.consumer.internals.SendFailedException 
{quote}

The exception occurs because send is queued before the SSL handshake is complete. 
I am not sure if the exception is harmless, but it would be good to avoid the 
exception either way, since it feels like an exception that exists to handle 
edge cases like buffer overflow rather than something in a normal code path.

  was:

When running new consumer with SSL, debug logs show these exceptions every time:

{quote}
[2015-10-19 20:57:43,389] DEBUG Fetch failed 
(org.apache.kafka.clients.consumer.internals.Fetcher)
org.apache.kafka.clients.consumer.internals.SendFailedException 
{quote}

The exception occurs  because send is queued before SSL handshake is complete. 
I am not sure if the exception is harmless, but it will be good to avoid the 
exception either way since it feels like an exception that exists to handle 
edge cases like buffer overflow rather than something in a normal code path.


> SendFailedException when new consumer is run with SSL
> -
>
> Key: KAFKA-2672
> URL: https://issues.apache.org/jira/browse/KAFKA-2672
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Reporter: Rajini Sivaram
>Assignee: Jason Gustafson
> Fix For: 0.9.1.0
>
>
> When running new consumer with SSL, debug logs show these exceptions every 
> time:
> {quote}
> [2015-10-19 20:57:43,389] DEBUG Fetch failed 
> (org.apache.kafka.clients.consumer.internals.Fetcher)
> org.apache.kafka.clients.consumer.internals.SendFailedException 
> {quote}
> The exception occurs because send is queued before the SSL handshake is 
> complete. I am not sure if the exception is harmless, but it would be good to 
> avoid the exception either way, since it feels like an exception that exists 
> to handle edge cases like buffer overflow rather than something in a normal 
> code path.





[jira] [Updated] (KAFKA-2643) Run mirror maker tests in ducktape with SSL

2015-11-14 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2643:
---
Fix Version/s: (was: 0.9.0.0)
   0.9.1.0

> Run mirror maker tests in ducktape with SSL
> ---
>
> Key: KAFKA-2643
> URL: https://issues.apache.org/jira/browse/KAFKA-2643
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.9.1.0
>
>
> Mirror maker tests are currently run only with PLAINTEXT. They should be run 
> with SSL as well. This requires a console consumer timeout in the new 
> consumer, which is being added in KAFKA-2603.





[jira] [Updated] (KAFKA-2718) Reuse of temporary directories leading to transient unit test failures

2015-11-14 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2718:
---
Affects Version/s: 0.9.0.0
Fix Version/s: (was: 0.9.0.0)

> Reuse of temporary directories leading to transient unit test failures
> --
>
> Key: KAFKA-2718
> URL: https://issues.apache.org/jira/browse/KAFKA-2718
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>
> Stack traces in some of the transient unit test failures indicate that 
> temporary directories used for Zookeeper are being reused.
> {quote}
> kafka.common.TopicExistsException: Topic "topic" already exists.
>   at 
> kafka.admin.AdminUtils$.createOrUpdateTopicPartitionAssignmentPathInZK(AdminUtils.scala:253)
>   at kafka.admin.AdminUtils$.createTopic(AdminUtils.scala:237)
>   at kafka.utils.TestUtils$.createTopic(TestUtils.scala:231)
>   at kafka.api.BaseConsumerTest.setUp(BaseConsumerTest.scala:63)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> {quote}
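(One common remedy, sketched here rather than taken from the actual TestUtils change, is to create a uniquely named directory per test instead of reusing a fixed path.)

{code}
import java.io.File
import java.nio.file.Files

def tempDir(prefix: String = "kafka-zk-test"): File = {
  // createTempDirectory appends a unique suffix, so repeated or
  // concurrent test runs never collide on the same data directory
  val dir = Files.createTempDirectory(prefix).toFile
  dir.deleteOnExit()
  dir
}
{code}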





[jira] [Resolved] (KAFKA-2624) Truncate warn message logged after truncating partitions

2015-11-14 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-2624.

   Resolution: Fixed
Fix Version/s: (was: 0.9.0.1)
   (was: 0.8.2.2)
   (was: 0.8.2.1)
   (was: 0.8.1.2)
   (was: 0.8.1.1)
   (was: 0.8.2.0)
   (was: 0.10.0.0)
   (was: 0.8.1)
   (was: 0.7.2)
   (was: 0.7.1)
   (was: 0.8.0)
   (was: 0.7)
   (was: 0.6)

Issue resolved by pull request 287
[https://github.com/apache/kafka/pull/287]

> Truncate warn message logged after truncating partitions
> 
>
> Key: KAFKA-2624
> URL: https://issues.apache.org/jira/browse/KAFKA-2624
> Project: Kafka
>  Issue Type: Bug
>  Components: replication
>Affects Versions: 0.6, 0.7, 0.7.1, 0.7.2, 0.8.0, 0.8.1, 0.8.1.1, 0.8.1.2, 
> 0.8.2.0, 0.8.2.1, 0.9.0.0, 0.10.0.0, 0.8.2.2, 0.9.0.1
>Reporter: Francois Visconte
>Assignee: Neha Narkhede
>Priority: Trivial
> Fix For: 0.9.0.0
>
>
> The warning message about truncation is logged after the log has been truncated. 
> Consequently, the logged message is always of the form:
> Replica 13 for partition [topic_x,24] reset its fetch offset from 1234 to 
> current leader 22's latest offset 1234 (kafka.server.ReplicaFetcherThread)





[jira] [Commented] (KAFKA-2624) Truncate warn message logged after truncating partitions

2015-11-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005498#comment-15005498
 ] 

ASF GitHub Bot commented on KAFKA-2624:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/287


> Truncate warn message logged after truncating partitions
> 
>
> Key: KAFKA-2624
> URL: https://issues.apache.org/jira/browse/KAFKA-2624
> Project: Kafka
>  Issue Type: Bug
>  Components: replication
>Affects Versions: 0.6, 0.7, 0.7.1, 0.7.2, 0.8.0, 0.8.1, 0.8.1.1, 0.8.1.2, 
> 0.8.2.0, 0.8.2.1, 0.9.0.0, 0.10.0.0, 0.8.2.2, 0.9.0.1
>Reporter: Francois Visconte
>Assignee: Neha Narkhede
>Priority: Trivial
> Fix For: 0.9.0.0
>
>
> The warning message about truncation is logged after the log has been truncated. 
> Consequently, the logged message is always of the form:
> Replica 13 for partition [topic_x,24] reset its fetch offset from 1234 to 
> current leader 22's latest offset 1234 (kafka.server.ReplicaFetcherThread)





[GitHub] kafka pull request: KAFKA-2624: Change log message position

2015-11-14 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/287




Build failed in Jenkins: kafka_0.9.0_jdk7 #24

2015-11-14 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-2624; Change log message position

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-1 (docker Ubuntu ubuntu ubuntu1) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/0.9.0^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/0.9.0^{commit} # timeout=10
Checking out Revision 2261763bca19bb1ba578775e5cce98c2450311d9 
(refs/remotes/origin/0.9.0)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 2261763bca19bb1ba578775e5cce98c2450311d9
 > git rev-list e96b7239f3e918e9fc37a8d86433d41e56542137 # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka_0.9.0_jdk7] $ /bin/bash -xe /tmp/hudson6170149512243505141.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 12.062 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka_0.9.0_jdk7] $ /bin/bash -xe /tmp/hudson1959514367589509357.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 --stacktrace clean jarAll 
testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:clean UP-TO-DATE
:clients:clean
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:jar_core_2_10_5
Building project 'core' with Scala version 2.10.5
:kafka_0.9.0_jdk7:clients:compileJavaNote: 

 uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.

:jar_core_2_10_5 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Could not add entry 
'
 to cache fileHashes.bin 
(
> Corrupted FreeListBlock 651053 found in cache 
> '

* Try:
Run with --info or --debug option to get more log output.

* Exception is:
org.gradle.api.UncheckedIOException: Could not add entry 
'
 to cache fileHashes.bin 
(
at 
org.gradle.cache.internal.btree.BTreePersistentIndexedCache.put(BTreePersistentIndexedCache.java:155)
at 
org.gradle.cache.internal.DefaultMultiProcessSafePersistentIndexedCache$2.run(DefaultMultiProcessSafePersistentIndexedCache.java:51)
at 
org.gradle.cache.internal.DefaultFileLockManager$DefaultFileLock.doWriteAction(DefaultFileLockManager.java:173)
at 
org.gradle.cache.internal.DefaultFileLockManager$DefaultFileLock.writeFile(DefaultFileLockManager.java:163)
at 
org.gradle.cache.internal.DefaultCacheAccess$UnitOfWorkFileAccess.writeFile(DefaultCacheAccess.java:404)
at 
org.gradle.cache.internal.DefaultMultiProcessSafePersistentIndexedCache.put(DefaultMultiProcessSafePersistentIndexedCache.java:49)
at 

Build failed in Jenkins: kafka-trunk-jdk8 #155

2015-11-14 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-2624; Change log message position

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-2 (docker Ubuntu ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 356544caba6448c6ba3bcdb38bea787e1fbc277b 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 356544caba6448c6ba3bcdb38bea787e1fbc277b
 > git rev-list f6d369b70d329af1cc882998aa517926aa9458ab # timeout=10
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson6439605245905064659.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 8.296 secs
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson4451964957239433587.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:jar_core_2_10_5
Building project 'core' with Scala version 2.10.5
:kafka-trunk-jdk8:clients:compileJava
:jar_core_2_10_5 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 8.903 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2


Build failed in Jenkins: kafka-trunk-jdk7 #822

2015-11-14 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-2624; Change log message position

--
[...truncated 1404 lines...]

kafka.log.LogTest > testParseTopicPartitionNameForMissingPartition PASSED

kafka.log.LogTest > testParseTopicPartitionNameForEmptyName PASSED

kafka.log.LogTest > testOpenDeletesObsoleteFiles PASSED

kafka.log.LogTest > testSizeBasedLogRoll PASSED

kafka.log.LogTest > testTimeBasedLogRollJitter PASSED

kafka.log.LogTest > testParseTopicPartitionName PASSED

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.OffsetIndexTest > lookupExtremeCases PASSED

kafka.log.OffsetIndexTest > appendTooMany PASSED

kafka.log.OffsetIndexTest > randomLookupTest PASSED

kafka.log.OffsetIndexTest > testReopen PASSED

kafka.log.OffsetIndexTest > appendOutOfOrder PASSED

kafka.log.OffsetIndexTest > truncate PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.CleanerTest > testBuildOffsetMap PASSED

kafka.log.CleanerTest > testSegmentGrouping PASSED

kafka.log.CleanerTest > testCleanSegmentsWithAbort PASSED

kafka.log.CleanerTest > testSegmentGroupingWithSparseOffsets PASSED

kafka.log.CleanerTest > testRecoveryAfterCrash PASSED

kafka.log.CleanerTest > testLogToClean PASSED

kafka.log.CleanerTest > testCleaningWithDeletes PASSED

kafka.log.CleanerTest > testCleanSegments PASSED

kafka.log.CleanerTest > testCleaningWithUnkeyedMessages PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupStable PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatIllegalGeneration 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testDescribeGroupWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupRebalancing 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaderFailureInSyncGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testGenerationIdIncrementsOnRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFromIllegalGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testInvalidGroupId PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesStableGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatDuringRebalanceCausesRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupInconsistentGroupProtocol PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooLarge PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooSmall PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupEmptyAssignment 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testCommitOffsetWithDefaultGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupFromUnchangedLeaderShouldRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesRebalancingGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFollowerAfterLeader PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testCommitOffsetInAwaitingSync 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupFromUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupInconsistentProtocolType PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testCommitOffsetFromUnknownGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupWrongCoordinator 
PASSED


[jira] [Updated] (KAFKA-2658) Implement SASL/PLAIN

2015-11-14 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2658:
---
Affects Version/s: 0.9.0.0
Fix Version/s: (was: 0.9.0.0)
   0.9.1.0

> Implement SASL/PLAIN
> 
>
> Key: KAFKA-2658
> URL: https://issues.apache.org/jira/browse/KAFKA-2658
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Critical
>
> KAFKA-1686 supports SASL/Kerberos using GSSAPI. We should enable more SASL 
> mechanisms. SASL/PLAIN would enable a simpler use of SASL, which along with 
> SSL provides a secure Kafka that uses username/password for client 
> authentication.
> SASL/PLAIN protocol and its uses are described in 
> [https://tools.ietf.org/html/rfc4616]. It is supported in Java.
> This should be implemented after KAFKA-1686. This task should also hopefully 
> enable simpler unit testing of the SASL code.
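For reference, the PLAIN mechanism message that RFC 4616 describes is just the authorization identity, authentication identity, and password joined by NUL bytes. A minimal sketch of that client response encoding (illustrative only, not Kafka's eventual implementation):

```java
import java.nio.charset.StandardCharsets;

// SASL/PLAIN initial client response per RFC 4616:
//   message = [authzid] UTF8NUL authcid UTF8NUL passwd
class SaslPlain {
    public static byte[] clientResponse(String authzid, String authcid, String passwd) {
        // NUL ('\u0000') separates the three fields; authzid may be empty.
        String message = authzid + '\u0000' + authcid + '\u0000' + passwd;
        return message.getBytes(StandardCharsets.UTF_8);
    }
}
```

Because the password travels in the clear within the SASL exchange, PLAIN is only safe over an encrypted transport, which is why the ticket pairs it with SSL.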





[jira] [Updated] (KAFKA-2658) Implement SASL/PLAIN

2015-11-14 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2658:
---
Fix Version/s: (was: 0.9.1.0)

> Implement SASL/PLAIN
> 
>
> Key: KAFKA-2658
> URL: https://issues.apache.org/jira/browse/KAFKA-2658
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Critical
>
> KAFKA-1686 supports SASL/Kerberos using GSSAPI. We should enable more SASL 
> mechanisms. SASL/PLAIN would enable a simpler use of SASL, which along with 
> SSL provides a secure Kafka that uses username/password for client 
> authentication.
> SASL/PLAIN protocol and its uses are described in 
> [https://tools.ietf.org/html/rfc4616]. It is supported in Java.
> This should be implemented after KAFKA-1686. This task should also hopefully 
> enable simpler unit testing of the SASL code.





[jira] [Updated] (KAFKA-2500) Make logEndOffset available in the 0.8.3 Consumer

2015-11-14 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2500:
---
Fix Version/s: (was: 0.9.0.0)

> Make logEndOffset available in the 0.8.3 Consumer
> -
>
> Key: KAFKA-2500
> URL: https://issues.apache.org/jira/browse/KAFKA-2500
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: Will Funnell
>Assignee: Jason Gustafson
>Priority: Critical
>
> Originally created in the old consumer here: 
> https://issues.apache.org/jira/browse/KAFKA-1977
> The requirement is to create a snapshot from the Kafka topic but NOT do 
> continual reads after that point. For example you might be creating a backup 
> of the data to a file.
> This ticket covers the addition of the functionality to the new consumer.
> In order to achieve that, a recommended solution by Joel Koshy and Jay Kreps 
> was to expose the high watermark, as maxEndOffset, from the FetchResponse 
> object through to each MessageAndMetadata object in order to be aware when 
> the consumer has reached the end of each partition.
> The submitted patch achieves this by adding the maxEndOffset to the 
> PartitionTopicInfo, which is updated when a new message arrives in the 
> ConsumerFetcherThread and then exposed in MessageAndMetadata.
> See here for discussion:
> http://search-hadoop.com/m/4TaT4TpJy71
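Once the end offset (high watermark) is visible per message, the snapshot use case reduces to "stop when every partition's consumed offset reaches the end offset captured at the start". A sketch of that completion check as a pure helper (names are illustrative, not a Kafka API):

```java
import java.util.Map;

// Illustrative helper for the snapshot use case: given end offsets captured
// before consumption began and the offsets consumed so far, decide whether
// the snapshot is complete.
class SnapshotCheck {
    public static boolean snapshotComplete(Map<Integer, Long> endOffsets,
                                           Map<Integer, Long> consumedOffsets) {
        for (Map.Entry<Integer, Long> e : endOffsets.entrySet()) {
            long consumed = consumedOffsets.getOrDefault(e.getKey(), 0L);
            if (consumed < e.getValue())
                return false; // this partition has not reached its end offset yet
        }
        return true;
    }
}
```

The consumer loop would poll, update `consumedOffsets`, and exit as soon as this predicate turns true, giving a point-in-time backup without continual reads.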





[jira] [Updated] (KAFKA-2698) add paused API

2015-11-14 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2698:
---
Fix Version/s: (was: 0.9.0.0)
   0.9.1.0

> add paused API
> --
>
> Key: KAFKA-2698
> URL: https://issues.apache.org/jira/browse/KAFKA-2698
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Onur Karaman
>Priority: Critical
> Fix For: 0.9.1.0
>
>
> org.apache.kafka.clients.consumer.Consumer tends to follow a pattern of 
> having an action API paired with a query API:
> subscribe() has subscription()
> assign() has assignment()
> There's no analogous API for pause.
> Should there be a paused() API returning Set<TopicPartition>?
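The action/query pairing the ticket proposes can be sketched as below. This is a hypothetical toy (`PausableConsumer` with `String` partitions is not the real consumer API); it only illustrates `pause(...)` as the action and `paused()` as the matching query, mirroring subscribe()/subscription().

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

// Toy illustration of the proposed symmetry: every state-changing call has a
// read-only query returning the current state.
class PausableConsumer {
    private final Set<String> paused = new HashSet<>();

    public void pause(String... partitions) {      // action
        Collections.addAll(paused, partitions);
    }

    public void resume(String... partitions) {     // inverse action
        for (String p : partitions) paused.remove(p);
    }

    public Set<String> paused() {                  // the query the ticket asks for
        return Collections.unmodifiableSet(paused);
    }
}
```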





[jira] [Updated] (KAFKA-2000) Delete consumer offsets from kafka once the topic is deleted

2015-11-14 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2000:
---
Fix Version/s: (was: 0.9.0.0)
   0.9.1.0

> Delete consumer offsets from kafka once the topic is deleted
> 
>
> Key: KAFKA-2000
> URL: https://issues.apache.org/jira/browse/KAFKA-2000
> Project: Kafka
>  Issue Type: Bug
>Reporter: Sriharsha Chintalapani
>Assignee: Sriharsha Chintalapani
>  Labels: newbie++
> Fix For: 0.9.1.0
>
> Attachments: KAFKA-2000.patch, KAFKA-2000_2015-05-03_10:39:11.patch
>
>






[jira] [Updated] (KAFKA-2103) kafka.producer.AsyncProducerTest failure.

2015-11-14 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2103:
---
Affects Version/s: 0.9.0.0
Fix Version/s: (was: 0.9.0.0)

> kafka.producer.AsyncProducerTest failure.
> -
>
> Key: KAFKA-2103
> URL: https://issues.apache.org/jira/browse/KAFKA-2103
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Ewen Cheslack-Postava
> Attachments: KAFKA-2103.patch
>
>
> I saw this test consistently failing on trunk.
> The recent changes are KAFKA-2099, KAFKA-1926, KAFKA-1809.
> kafka.producer.AsyncProducerTest > testNoBroker FAILED
> org.scalatest.junit.JUnitTestFailedError: Should fail with 
> FailedToSendMessageException
> at 
> org.scalatest.junit.AssertionsForJUnit$class.newAssertionFailedException(AssertionsForJUnit.scala:101)
> at 
> org.scalatest.junit.JUnit3Suite.newAssertionFailedException(JUnit3Suite.scala:149)
> at org.scalatest.Assertions$class.fail(Assertions.scala:711)
> at org.scalatest.junit.JUnit3Suite.fail(JUnit3Suite.scala:149)
> at 
> kafka.producer.AsyncProducerTest.testNoBroker(AsyncProducerTest.scala:300)
> kafka.producer.AsyncProducerTest > testIncompatibleEncoder PASSED
> kafka.producer.AsyncProducerTest > testRandomPartitioner PASSED
> kafka.producer.AsyncProducerTest > testFailedSendRetryLogic FAILED
> kafka.common.FailedToSendMessageException: Failed to send messages after 
> 3 tries.
> at 
> kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:91)
> at 
> kafka.producer.AsyncProducerTest.testFailedSendRetryLogic(AsyncProducerTest.scala:415)





[jira] [Updated] (KAFKA-2365) Copycat checklist

2015-11-14 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2365:
---
Affects Version/s: 0.9.0.0
Fix Version/s: (was: 0.9.0.0)

> Copycat checklist
> -
>
> Key: KAFKA-2365
> URL: https://issues.apache.org/jira/browse/KAFKA-2365
> Project: Kafka
>  Issue Type: New Feature
>  Components: copycat
>Affects Versions: 0.9.0.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>  Labels: feature
>
> This covers the development plan for 
> [KIP-26|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=58851767].
>  There are a number of features that can be developed in sequence to make 
> incremental progress, and often in parallel:
> * Initial patch - connector API and core implementation
> * Runtime data API
> * Standalone CLI
> * REST API
> * Distributed copycat - CLI
> * Distributed copycat - coordinator
> * Distributed copycat - config storage
> * Distributed copycat - offset storage
> * Log/file connector (sample source/sink connector)
> * Elasticsearch sink connector (sample sink connector for full log -> Kafka 
> -> Elasticsearch sample pipeline)
> * Copycat metrics
> * System tests (including connector tests)
> * Mirrormaker connector
> * Copycat documentation
> This is an initial list, but it might need refinement to allow for more 
> incremental progress and may be missing features we find we want before the 
> initial release.





[jira] [Updated] (KAFKA-2376) Add Copycat metrics

2015-11-14 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2376:
---
Affects Version/s: 0.9.0.0
Fix Version/s: (was: 0.9.0.0)

> Add Copycat metrics
> ---
>
> Key: KAFKA-2376
> URL: https://issues.apache.org/jira/browse/KAFKA-2376
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Affects Versions: 0.9.0.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>
> Copycat needs good metrics for monitoring since that will be the primary 
> insight into the health of connectors as they copy data.





[jira] [Updated] (KAFKA-2370) Add pause/unpause connector support

2015-11-14 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2370:
---
Fix Version/s: (was: 0.9.0.0)
   0.9.1.0

> Add pause/unpause connector support
> ---
>
> Key: KAFKA-2370
> URL: https://issues.apache.org/jira/browse/KAFKA-2370
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.1.0
>
>
> It will sometimes be useful to pause/unpause connectors. For example, if you 
> know planned maintenance will occur on the source/destination system, it 
> would make sense to pause and then resume (but not delete and then restore) 
> a connector.
> This likely requires support in all Coordinator implementations 
> (standalone/distributed) to trigger the events.





[jira] [Updated] (KAFKA-2642) Run replication tests in ducktape with SSL for clients

2015-11-14 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2642:
---
Affects Version/s: 0.9.0.0
Fix Version/s: (was: 0.9.0.0)
   0.9.1.0

> Run replication tests in ducktape with SSL for clients
> --
>
> Key: KAFKA-2642
> URL: https://issues.apache.org/jira/browse/KAFKA-2642
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.9.1.0
>
>
> Under KAFKA-2581, replication tests were parametrized to run with SSL for 
> interbroker communication, but not for clients. When KAFKA-2603 is committed, 
> the tests should be able to use SSL for clients as well.





[jira] [Updated] (KAFKA-2667) Copycat KafkaBasedLogTest.testSendAndReadToEnd transient failure

2015-11-14 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2667:
---
Fix Version/s: (was: 0.9.0.0)
   0.9.1.0

> Copycat KafkaBasedLogTest.testSendAndReadToEnd transient failure
> 
>
> Key: KAFKA-2667
> URL: https://issues.apache.org/jira/browse/KAFKA-2667
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Jason Gustafson
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.1.0
>
>
> Seen in recent builds:
> {code}
> org.apache.kafka.copycat.util.KafkaBasedLogTest > testSendAndReadToEnd FAILED
> java.lang.AssertionError
> at org.junit.Assert.fail(Assert.java:86)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at org.junit.Assert.assertTrue(Assert.java:52)
> at 
> org.apache.kafka.copycat.util.KafkaBasedLogTest.testSendAndReadToEnd(KafkaBasedLogTest.java:335)
> {code}





[jira] [Updated] (KAFKA-1326) New consumer checklist

2015-11-14 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-1326:
---
Fix Version/s: (was: 0.9.0.0)
   0.9.1.0

> New consumer checklist
> --
>
> Key: KAFKA-1326
> URL: https://issues.apache.org/jira/browse/KAFKA-1326
> Project: Kafka
>  Issue Type: New Feature
>  Components: consumer
>Affects Versions: 0.8.2.1
>Reporter: Neha Narkhede
>Assignee: Neha Narkhede
>  Labels: feature
> Fix For: 0.9.1.0
>
>
> We will use this JIRA to track the list of issues to resolve to get a working 
> new consumer client. The consumer client can work in phases -
> 1. Add new consumer APIs and configs
> 2. Refactor Sender. We will need to use some common APIs from Sender.java 
> (https://issues.apache.org/jira/browse/KAFKA-1316)
> 3. Add metadata fetch and refresh functionality to the consumer (This will 
> require https://issues.apache.org/jira/browse/KAFKA-1316)
> 4. Add functionality to support subscribe(TopicPartition...partitions). This 
> will add SimpleConsumer functionality to the new consumer. This does not 
> include any group management related work.
> 5. Add ability to commit offsets to Kafka. This will include adding 
> functionality to the commit()/commitAsync()/committed() APIs. This still does 
> not include any group management related work.
> 6. Add functionality to the offsetsBeforeTime() API.
> 7. Add consumer co-ordinator election to the server. This will only add a new 
> module for the consumer co-ordinator, but not necessarily all the logic to do 
> group management. 
> At this point, we will have a fully functional standalone consumer and a 
> server side co-ordinator module. This will be a good time to start adding 
> group management functionality to the server and consumer.
> 8. Add failure detection capability to the consumer when group management is 
> used. This will not include any rebalancing logic, just the ability to detect 
> failures using session.timeout.ms.
> 9. Add rebalancing logic to the server and consumer. This will be a tricky 
> and potentially large change since it will involve implementing the group 
> management protocol.
> 10. Add system tests for the new consumer
> 11. Add metrics 
> 12. Convert mirror maker to use the new consumer.
> 13. Convert perf test to use the new consumer
> 14. Performance testing and analysis.
> 15. Review and fine tune log4j logging





[jira] [Updated] (KAFKA-2688) Avoid forcing reload of `Configuration`

2015-11-14 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2688:
---
Fix Version/s: (was: 0.9.0.0)
   0.9.1.0

> Avoid forcing reload of `Configuration`
> ---
>
> Key: KAFKA-2688
> URL: https://issues.apache.org/jira/browse/KAFKA-2688
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Reporter: Ismael Juma
>Assignee: Flavio Junqueira
>Priority: Critical
> Fix For: 0.9.1.0
>
>
> We currently call `Configuration.setConfiguration(null)` from a couple of 
> places in our codebase (`Login` and `JaasUtils`) to force `Configuration` to 
> be reloaded. If this code is removed, some tests can fail depending on the 
> test execution order.
> Ideally we would not need to call `setConfiguration(null)` outside of tests. 
> Investigate if this is possible. If not, we should at least ensure that 
> reloads are done in a safe way within our codebase (perhaps using a lock).
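The "reload under a lock" idea can be sketched generically: force all reloads of a shared, globally visible configuration through one lock so concurrent callers never observe the cleared intermediate state. This is a generic stand-in, not the actual `javax.security.auth.login.Configuration` API or Kafka's `Login`/`JaasUtils` code.

```java
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Supplier;

// Generic sketch: serialize forced reloads of a shared config so readers
// never see the "set to null, then reload" gap.
class GuardedReload<T> {
    private final ReentrantLock lock = new ReentrantLock();
    private volatile T current;

    public GuardedReload(T initial) { current = initial; }

    public T get() { return current; }

    public T reload(Supplier<T> loader) {
        lock.lock();
        try {
            current = loader.get(); // swap in one step; no null window is published
            return current;
        } finally {
            lock.unlock();
        }
    }
}
```

The key design point is that the replacement value is computed and published while holding the lock, instead of clearing the global state first and reloading lazily, which is what makes test ordering matter today.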





[jira] [Updated] (KAFKA-2145) An option to add topic owners.

2015-11-14 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2145:
---
Fix Version/s: (was: 0.9.0.0)
   0.9.1.0

> An option to add topic owners. 
> ---
>
> Key: KAFKA-2145
> URL: https://issues.apache.org/jira/browse/KAFKA-2145
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Parth Brahmbhatt
>Assignee: Parth Brahmbhatt
> Fix For: 0.9.1.0
>
>
> We need to expose a way for users to identify the users/groups that share 
> ownership of a topic. We discussed adding this as part of 
> https://issues.apache.org/jira/browse/KAFKA-2035 and agreed that it will be 
> simpler to add the owner as a log config. 
> The owner field can be used for auditing and also by the authorization layer 
> to grant access without having to explicitly configure ACLs. 





[jira] [Updated] (KAFKA-2617) Move protocol field default values to Protocol

2015-11-14 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2617:
---
Fix Version/s: (was: 0.9.0.0)
   0.9.1.0

> Move protocol field default values to Protocol
> --
>
> Key: KAFKA-2617
> URL: https://issues.apache.org/jira/browse/KAFKA-2617
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Jakub Nowak
>  Labels: newbie
> Fix For: 0.9.1.0
>
>
> Right now the default values are scattered in the Request / Response classes, 
> and some duplicates already exists like JoinGroupRequest.UNKNOWN_CONSUMER_ID 
> and OffsetCommitRequest.DEFAULT_CONSUMER_ID. We would like to move all 
> default values into org.apache.kafka.common.protocol.Protocol since 
> org.apache.kafka.common.requests depends on org.apache.kafka.common.protocol 
> anyways.
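The proposed consolidation can be sketched as one protocol-level holder that request classes reference instead of redefining. Class and constant names below are illustrative, not the final API:

```java
// Illustrative consolidation for KAFKA-2617: a single canonical default in a
// protocol-level holder, referenced by the request classes.
class ProtocolDefaults {
    public static final String UNKNOWN_CONSUMER_ID = "";  // hypothetical names
    public static final int DEFAULT_GENERATION_ID = -1;
}

class JoinGroupRequestSketch {
    // The request class aliases the shared constant rather than duplicating it,
    // so the two values can never drift apart.
    public static final String UNKNOWN_CONSUMER_ID = ProtocolDefaults.UNKNOWN_CONSUMER_ID;
}
```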





[jira] [Updated] (KAFKA-2619) Kafka tools dont allow override of some properties using config files

2015-11-14 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2619:
---
Fix Version/s: (was: 0.9.0.0)

> Kafka tools dont allow override of some properties using config files
> -
>
> Key: KAFKA-2619
> URL: https://issues.apache.org/jira/browse/KAFKA-2619
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>
> [~ewencp] pointed out in the PR review of KAFKA-2581 that the parsing of 
> config files in performance tools etc. is not quite as expected. Default 
> properties are added to the configuration after the config files are parsed, 
> and hence some properties cannot be overridden in config files. This should 
> be fixed in all tools to keep them consistent.
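The intended precedence can be sketched with plain `java.util.Properties`: apply the built-in defaults first, then overlay the user's config file, so file values win. This illustrates the ordering fix only; `ToolConfig` and the property names are hypothetical.

```java
import java.util.Properties;

// Sketch of the correct ordering: defaults first, config file second.
class ToolConfig {
    public static Properties merge(Properties defaults, Properties fromFile) {
        Properties merged = new Properties();
        merged.putAll(defaults);  // built-in defaults applied first
        merged.putAll(fromFile);  // user's config file overrides them
        return merged;
    }
}
```

The bug described above is the reverse order: putting the defaults in last silently clobbers whatever the user set in the file.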





[jira] [Updated] (KAFKA-2483) Add support for batch/scheduled Copycat connectors

2015-11-14 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2483:
---
Fix Version/s: (was: 0.9.0.0)
   0.9.1.0

> Add support for batch/scheduled Copycat connectors
> --
>
> Key: KAFKA-2483
> URL: https://issues.apache.org/jira/browse/KAFKA-2483
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.1.0
>
>
> A few connectors may not perform well if run continuously; for example, HDFS 
> may not handle well a task that holds a file open for very long periods of 
> time.
> These connectors will work better if they can schedule themselves to be 
> executed periodically. Note that this cannot currently be implemented by the 
> connector itself, because sink connectors get data delivered to them as it 
> streams in. However, the combination of KAFKA-2481 and KAFKA-2482 would make 
> it possible, if inconvenient, to implement this in the task itself.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2584) SecurityProtocol enum validation should be removed or relaxed for non-config usages

2015-11-14 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2584:
---
Affects Version/s: 0.9.0.0
Fix Version/s: (was: 0.9.0.0)

> SecurityProtocol enum validation should be removed or relaxed for non-config 
> usages
> ---
>
> Key: KAFKA-2584
> URL: https://issues.apache.org/jira/browse/KAFKA-2584
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Joel Koshy
>
> While deploying SSL to our clusters, we had to roll back due to another 
> compatibility issue similar to what we mentioned in passing in other 
> threads/KIP hangouts. i.e., picking up jars between official releases. 
> Fortunately, there is an easy server-side hot-fix we can do internally to 
> work around it. However, I would classify the issue below as a bug since 
> there is little point in doing endpoint type validation (except for config 
> validation).
> What happened here is that some (old) consumers (that do not care about SSL) 
> picked up a Kafka jar that understood multiple endpoints but did not have the 
> SSL feature. The rebalance fails because while creating the Broker objects we 
> are forced to validate all the endpoints.
> Yes the old consumer is going away, but this would affect tools as well. The 
> same issue could also happen on the brokers if we were to upgrade them to 
> include (say) a Kerberos endpoint. So the old brokers would not be able to 
> read the registration of newly upgraded brokers. Well you could get around 
> that by doing two rounds of deployment (one to get the new code, and another 
> to expose the Kerberos endpoint) but that’s inconvenient and I think 
> unnecessary. Although validation makes sense for configs, I think the current 
> validate-everywhere approach is overkill. (i.e., an old consumer, tool, or 
> broker should not complain because another broker can talk more protocols.)
> {noformat}
> kafka.common.KafkaException: Failed to parse the broker info from zookeeper: 
> {"jmx_port":-1,"timestamp":"1442952770627","endpoints":["PLAINTEXT://:","SSL://:"],"host”:”","version":2,"port”:}
> at kafka.cluster.Broker$.createBroker(Broker.scala:61)
> at kafka.utils.ZkUtils$$anonfun$getCluster$1.apply(ZkUtils.scala:520)
> at kafka.utils.ZkUtils$$anonfun$getCluster$1.apply(ZkUtils.scala:518)
> at scala.collection.Iterator$class.foreach(Iterator.scala:727)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at kafka.utils.ZkUtils$.getCluster(ZkUtils.scala:518)
> at kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener
> ...
> Caused by: java.lang.IllegalArgumentException: No enum constant 
> org.apache.kafka.common.protocol.SecurityProtocol.SSL
> at java.lang.Enum.valueOf(Enum.java:238)
> at 
> org.apache.kafka.common.protocol.SecurityProtocol.valueOf(SecurityProtocol.java:24)
> at kafka.cluster.EndPoint$.createEndPoint(EndPoint.scala:48)
> at kafka.cluster.Broker$$anonfun$1.apply(Broker.scala:74)
> at kafka.cluster.Broker$$anonfun$1.apply(Broker.scala:73)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at scala.collection.immutable.List.foreach(List.scala:318)
> at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
> at scala.collection.AbstractTraversable.map(Traversable.scala:105)
> at kafka.cluster.Broker$.createBroker(Broker.scala:73)
> ... 70 more
> {noformat}
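The lenient behavior argued for above can be sketched like this. This is illustrative only (not Kafka's actual parsing code, and `KnownProtocol` is a stand-in enum): an old client keeps the endpoints whose security protocol it understands and skips the rest, instead of rejecting the whole broker registration.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: parse registered endpoints leniently, keeping only
// the ones whose protocol this JVM knows, instead of failing on the first
// unknown enum constant as in the stack trace above.
public class LenientEndpoints {
    enum KnownProtocol { PLAINTEXT } // imagine an old client without SSL

    static List<String> usable(List<String> endpoints) {
        List<String> result = new ArrayList<>();
        for (String ep : endpoints) {
            String protocol = ep.substring(0, ep.indexOf("://"));
            try {
                KnownProtocol.valueOf(protocol); // throws for unknown protocols
                result.add(ep);
            } catch (IllegalArgumentException unknown) {
                // Ignore endpoints this client cannot speak, rather than
                // rejecting the broker registration entirely.
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> eps = List.of("PLAINTEXT://host:9092", "SSL://host:9093");
        System.out.println(usable(eps)); // [PLAINTEXT://host:9092]
    }
}
```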



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2608) Recommend kafka_2.11 for 0.9.0.0 on the website

2015-11-14 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2608:
---
Affects Version/s: 0.9.0.0
Fix Version/s: (was: 0.9.0.0)

> Recommend kafka_2.11 for 0.9.0.0 on the website
> ---
>
> Key: KAFKA-2608
> URL: https://issues.apache.org/jira/browse/KAFKA-2608
> Project: Kafka
>  Issue Type: Task
>  Components: website
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>
> Scala 2.11 has been out for 17 months and Scala 2.10 is not being updated 
> anymore. We should recommend the Scala 2.11 version of 0.9.0.0 on the website:
> http://kafka.apache.org/downloads.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2692) Add ducktape tests for SASL/Kerberos

2015-11-14 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2692:
---
Fix Version/s: (was: 0.9.0.0)
   0.9.1.0

> Add ducktape tests for SASL/Kerberos
> 
>
> Key: KAFKA-2692
> URL: https://issues.apache.org/jira/browse/KAFKA-2692
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Rajini Sivaram
> Fix For: 0.9.1.0
>
>
> KAFKA-2644 runs replication tests and benchmarks with SASL/Kerberos using 
> MiniKdc. Additional Kerberos-specific tests are required, particularly to 
> test error scenarios. This may require replacing MiniKdc with a full KDC.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2732) Add support for consumer test with ZK Auth, SASL and SSL

2015-11-14 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2732:
---
Fix Version/s: (was: 0.9.0.0)
   0.9.1.0

> Add support for consumer test with ZK Auth, SASL and SSL
> 
>
> Key: KAFKA-2732
> URL: https://issues.apache.org/jira/browse/KAFKA-2732
> Project: Kafka
>  Issue Type: Test
>  Components: security
>Affects Versions: 0.9.0.0
>Reporter: Flavio Junqueira
>Assignee: Flavio Junqueira
> Fix For: 0.9.1.0
>
>
> Extend SaslSslConsumerTest to use ZK Auth, and add the support needed for it 
> to work properly. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2789) Update upgrade system test to catch issue reported in KAFKA-2756

2015-11-14 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2789:
---
Fix Version/s: (was: 0.9.0.0)
   0.9.1.0

> Update upgrade system test to catch issue reported in KAFKA-2756
> 
>
> Key: KAFKA-2789
> URL: https://issues.apache.org/jira/browse/KAFKA-2789
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
> Fix For: 0.9.1.0
>
>
> It's not good that the upgrade system test didn't catch KAFKA-2756
> Hypothesis:
> We think KAFKA-2756 would manifest as failed fetch requests and replicas 
> falling out of the ISR set.
> However, the test only validates that acked messages are available for 
> consumption. It may be that some messages simply were not acked, but this is 
> not currently a failure condition for the test.
> Proposed update:
> - Since every shutdown is a clean shutdown, and `min.insync.replicas = 2`, we 
> expect that every message should be acked in this test. Update validation to 
> confirm this.
> - Depending on how the leader moves during the rolling bounces, it might 
> still be possible for every message to be acked even if replicas fall out of 
> the ISR. So we should also verify after each bounce that the size of the ISR 
> set goes back to 3 within a short period of time.
> When making this test update, we should check that the test fails if we 
> remove the fix to KAFKA-2756.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2787) Refactor gradle build

2015-11-14 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2787:
---
Fix Version/s: (was: 0.9.0.0)
   0.9.1.0

> Refactor gradle build
> -
>
> Key: KAFKA-2787
> URL: https://issues.apache.org/jira/browse/KAFKA-2787
> Project: Kafka
>  Issue Type: Task
>  Components: build
>Affects Versions: 0.8.2.2
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.9.1.0
>
>
> The build files are quite large with a lot of duplication and overlap. This 
> could lead to mistakes, reduce readability and functionality, and hinder 
> future changes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-313) Add JSON/CSV output and looping options to ConsumerGroupCommand

2015-11-14 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-313:
--
Affects Version/s: 0.9.0.0
Fix Version/s: (was: 0.9.0.0)

> Add JSON/CSV output and looping options to ConsumerGroupCommand
> ---
>
> Key: KAFKA-313
> URL: https://issues.apache.org/jira/browse/KAFKA-313
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Dave DeMaagd
>Assignee: Ashish K Singh
>Priority: Minor
>  Labels: newbie, patch
> Attachments: KAFKA-313-2012032200.diff, KAFKA-313.1.patch, 
> KAFKA-313.patch, KAFKA-313_2015-02-23_18:11:32.patch, 
> KAFKA-313_2015-06-24_11:14:24.patch, KAFKA-313_2015-08-05_15:37:32.patch, 
> KAFKA-313_2015-08-05_15:43:00.patch, KAFKA-313_2015-08-10_12:58:38.patch, 
> KAFKA-313_2015-08-12_14:21:32.patch
>
>
> Adds:
> * '--loop N' - causes the program to loop forever, sleeping for up to N 
> seconds between loops (loop time minus collection time, unless that's less 
> than 0, at which point it will just run again immediately)
> * '--asjson' - display as a JSON string instead of the more human readable 
> output format.
> Neither option depends on the other (you can loop with the human-readable 
> output, or do a single-shot execution with JSON output). Existing 
> behavior/output is maintained if neither option is used. Diff attached.
> Impacted files:
> core/src/main/scala/kafka/admin/ConsumerGroupCommand.scala
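The `--loop N` sleep computation described above can be sketched in a few lines (a sketch of the stated behavior, not the patch itself): sleep for the loop interval minus the time the collection took, floored at zero.

```java
// Sketch of the '--loop N' timing rule: loop time minus collection time,
// running again immediately if collection took longer than the interval.
public class LoopTimer {
    static long sleepMillis(long loopIntervalMs, long collectionTookMs) {
        return Math.max(0, loopIntervalMs - collectionTookMs);
    }

    public static void main(String[] args) {
        System.out.println(sleepMillis(10_000, 3_000));  // 7000
        System.out.println(sleepMillis(10_000, 12_000)); // 0: run again immediately
    }
}
```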



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2146) adding partition did not find the correct startIndex

2015-11-14 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2146:
---
Fix Version/s: (was: 0.9.0.0)

> adding partition did not find the correct startIndex 
> -
>
> Key: KAFKA-2146
> URL: https://issues.apache.org/jira/browse/KAFKA-2146
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 0.8.2.0
>Reporter: chenshangan
>Assignee: chenshangan
>Priority: Minor
> Attachments: KAFKA-2146.2.patch, KAFKA-2146.patch
>
>
> TopicCommand provides a tool to add partitions to existing topics. It tries 
> to find the startIndex from the existing partitions. There is a minor flaw in 
> this process: it uses the first partition fetched from ZooKeeper as the 
> start partition, and the first replica id in that partition as the 
> startIndex.
> First, the partition fetched first from ZooKeeper is not necessarily the 
> start partition. Since partition ids begin at zero, we should use the 
> partition with id zero as the start partition.
> Second, broker ids do not necessarily begin at 0, so the startIndex is not 
> necessarily the first replica id in the start partition.
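The fix described above can be sketched as follows (illustrative, not the actual patch): pick partition 0 explicitly, and derive startIndex from the position of its first replica within the broker list, since broker ids need not begin at 0.

```java
import java.util.List;
import java.util.Map;

// Sketch: startIndex = index of partition 0's first replica in the broker
// list, rather than the raw broker id of whatever partition came back first.
public class StartIndex {
    static int startIndex(Map<Integer, List<Integer>> assignment,
                          List<Integer> brokerIds) {
        List<Integer> replicasOfPartitionZero = assignment.get(0);
        return brokerIds.indexOf(replicasOfPartitionZero.get(0));
    }

    public static void main(String[] args) {
        // Broker ids start at 10; partition 0's first replica is broker 11.
        Map<Integer, List<Integer>> assignment =
            Map.of(0, List.of(11, 12), 1, List.of(12, 10));
        System.out.println(startIndex(assignment, List.of(10, 11, 12))); // 1
    }
}
```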



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2422) Allow copycat connector plugins to be aliased to simpler names

2015-11-14 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2422:
---
Affects Version/s: 0.9.0.0
Fix Version/s: (was: 0.9.0.0)

> Allow copycat connector plugins to be aliased to simpler names
> --
>
> Key: KAFKA-2422
> URL: https://issues.apache.org/jira/browse/KAFKA-2422
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Affects Versions: 0.9.0.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>Priority: Minor
>
> Configurations of connectors can get quite verbose when you have to specify 
> the full class name, e.g. 
> connector.class=org.apache.kafka.copycat.file.FileStreamSinkConnector
> It would be nice to allow connector classes to provide shorter aliases, e.g. 
> something like "file-sink", to make this config less verbose. Flume does 
> this, so we can use it as an example.
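A minimal sketch of the aliasing idea, with an assumed alias table (the short names and the resolution rule here are hypothetical): resolve a short alias to its fully-qualified class name, and fall back to treating the configured value as a class name when no alias matches.

```java
import java.util.Map;

// Hypothetical alias resolution for connector.class values.
public class ConnectorAliases {
    static final Map<String, String> ALIASES = Map.of(
        "file-sink", "org.apache.kafka.copycat.file.FileStreamSinkConnector",
        "file-source", "org.apache.kafka.copycat.file.FileStreamSourceConnector");

    static String resolve(String configured) {
        // Unknown values pass through unchanged, so full class names still work.
        return ALIASES.getOrDefault(configured, configured);
    }

    public static void main(String[] args) {
        System.out.println(resolve("file-sink"));
        System.out.println(resolve("com.example.MyConnector")); // passes through
    }
}
```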



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2542) Reuse Throttler code

2015-11-14 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2542:
---
Affects Version/s: 0.9.0.0
Fix Version/s: (was: 0.9.0.0)

> Reuse Throttler code
> 
>
> Key: KAFKA-2542
> URL: https://issues.apache.org/jira/browse/KAFKA-2542
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
>Priority: Minor
>
> ThroughputThrottler.java and Throttler.scala are quite similar. It would be 
> better to remove ThroughputThrottler, and place Throttler in 
> clients/o.a.k.common so that it can be reused.
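A minimal sketch of the shared-throttler idea, assuming a simple rate-based design (this is neither of the two existing implementations): track how long the observed events *should* have taken at the target rate, and tell the caller how long to sleep to catch up.

```java
// Illustrative rate throttler that both clients and core could reuse.
public class SimpleThrottler {
    private final double targetPerSec;
    private final long windowStartMs;
    private long countInWindow;

    SimpleThrottler(double targetPerSec, long nowMs) {
        this.targetPerSec = targetPerSec;
        this.windowStartMs = nowMs;
    }

    /** Returns how long the caller should sleep to stay at the target rate. */
    long throttleMillis(long nowMs) {
        countInWindow++;
        double elapsedSec = (nowMs - windowStartMs) / 1000.0;
        double expectedSec = countInWindow / targetPerSec;
        return Math.max(0, Math.round((expectedSec - elapsedSec) * 1000));
    }

    public static void main(String[] args) {
        SimpleThrottler t = new SimpleThrottler(10.0, 0); // 10 events/sec
        long sleep = 0;
        for (int i = 0; i < 5; i++) sleep = t.throttleMillis(0); // 5 events at t=0
        System.out.println(sleep); // 500: 5 events should take 0.5s at 10/sec
    }
}
```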



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] 0.9.0.0 Candidate 2

2015-11-14 Thread Harsha
+1 (binding)
1. ran unit tests 

On Fri, Nov 13, 2015, at 06:28 PM, Jun Rao wrote:
> This is the second candidate for release of Apache Kafka 0.9.0.0. This is a
> major release that includes (1) authentication (through SSL and SASL) and
> authorization, (2) a new Java consumer, (3) a Kafka Connect framework for
> data ingestion and egress, and (4) quotas. Since this is a major
> release, we will give people a bit more time to try it out.
> 
> Release Notes for the 0.9.0.0 release
> https://people.apache.org/~junrao/kafka-0.9.0.0-candidate2/RELEASE_NOTES.html
> 
> *** Please download, test and vote by Thursday, Nov. 19, 7pm PT
> 
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS in addition to the md5, sha1
> and sha2 (SHA256) checksum.
> 
> * Release artifacts to be voted upon (source and binary):
> https://people.apache.org/~junrao/kafka-0.9.0.0-candidate2/
> 
> * Maven artifacts to be voted upon prior to release:
> https://repository.apache.org/content/groups/staging/
> 
> * scala-doc
> https://people.apache.org/~junrao/kafka-0.9.0.0-candidate2/scaladoc/
> 
> * java-doc
> https://people.apache.org/~junrao/kafka-0.9.0.0-candidate2/javadoc/
> 
> * The tag to be voted upon (off the 0.9.0 branch) is the 0.9.0.0 tag
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=fad4826f4a58308a29a96ef706bb52b1a9a29e13
> 
> * Documentation
> http://kafka.apache.org/090/documentation.html
> 
> /***
> 
> Thanks,
> 
> Jun


Re: [VOTE] 0.9.0.0 Candidate 2

2015-11-14 Thread Harsha
 +1 (binding)
 1. ran unit tests 
 2. ran simple tests (producer, consumer) with sasl, ssl and non-secure.

Thanks,
Harsha

On Sat, Nov 14, 2015, at 02:42 PM, Harsha wrote:
> +1 (binding)
> 1. ran unit tests 
> 
> On Fri, Nov 13, 2015, at 06:28 PM, Jun Rao wrote:
> > This is the second candidate for release of Apache Kafka 0.9.0.0. This is a
> > major release that includes (1) authentication (through SSL and SASL) and
> > authorization, (2) a new Java consumer, (3) a Kafka Connect framework for
> > data ingestion and egress, and (4) quotas. Since this is a major
> > release, we will give people a bit more time to try it out.
> > 
> > Release Notes for the 0.9.0.0 release
> > https://people.apache.org/~junrao/kafka-0.9.0.0-candidate2/RELEASE_NOTES.html
> > 
> > *** Please download, test and vote by Thursday, Nov. 19, 7pm PT
> > 
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > http://kafka.apache.org/KEYS in addition to the md5, sha1
> > and sha2 (SHA256) checksum.
> > 
> > * Release artifacts to be voted upon (source and binary):
> > https://people.apache.org/~junrao/kafka-0.9.0.0-candidate2/
> > 
> > * Maven artifacts to be voted upon prior to release:
> > https://repository.apache.org/content/groups/staging/
> > 
> > * scala-doc
> > https://people.apache.org/~junrao/kafka-0.9.0.0-candidate2/scaladoc/
> > 
> > * java-doc
> > https://people.apache.org/~junrao/kafka-0.9.0.0-candidate2/javadoc/
> > 
> > * The tag to be voted upon (off the 0.9.0 branch) is the 0.9.0.0 tag
> > https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=fad4826f4a58308a29a96ef706bb52b1a9a29e13
> > 
> > * Documentation
> > http://kafka.apache.org/090/documentation.html
> > 
> > /***
> > 
> > Thanks,
> > 
> > Jun


[jira] [Resolved] (KAFKA-2688) Avoid forcing reload of `Configuration`

2015-11-14 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-2688.

   Resolution: Fixed
Fix Version/s: (was: 0.9.1.0)
   0.9.0.0

This was actually fixed by a commit from [~rsivaram].

> Avoid forcing reload of `Configuration`
> ---
>
> Key: KAFKA-2688
> URL: https://issues.apache.org/jira/browse/KAFKA-2688
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> We currently call `Configuration.setConfiguration(null)` from a couple of 
> places in our codebase (`Login` and `JaasUtils`) to force `Configuration` to 
> be reloaded. If this code is removed, some tests can fail depending on the 
> test execution order.
> Ideally we would not need to call `setConfiguration(null)` outside of tests. 
> Investigate if this is possible. If not, we should at least ensure that 
> reloads are done in a safe way within our codebase (perhaps using a lock).
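The "reload in a safe way (perhaps using a lock)" option can be sketched as below. This is an assumed design, not the actual Kafka code: the loaded configuration lives behind one lock so a forced reload and a concurrent read cannot interleave.

```java
// Sketch: guard both reads and forced reloads of a cached configuration
// with the same monitor, so a reload cannot race a lazy (re)load.
public final class SafeConfigHolder {
    private String cached; // guarded by 'this'
    private final java.util.function.Supplier<String> loader;

    SafeConfigHolder(java.util.function.Supplier<String> loader) {
        this.loader = loader;
    }

    synchronized String get() {
        if (cached == null) {
            cached = loader.get(); // lazy (re)load under the lock
        }
        return cached;
    }

    synchronized void forceReload() {
        cached = null; // next get() re-reads, still under the lock
    }

    public static void main(String[] args) {
        final int[] loads = {0};
        SafeConfigHolder h = new SafeConfigHolder(() -> "config-v" + (++loads[0]));
        System.out.println(h.get()); // config-v1
        System.out.println(h.get()); // config-v1 (cached)
        h.forceReload();
        System.out.println(h.get()); // config-v2 (reloaded)
    }
}
```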



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-2688) Avoid forcing reload of `Configuration`

2015-11-14 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma reassigned KAFKA-2688:
--

Assignee: Ismael Juma  (was: Flavio Junqueira)

> Avoid forcing reload of `Configuration`
> ---
>
> Key: KAFKA-2688
> URL: https://issues.apache.org/jira/browse/KAFKA-2688
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Critical
> Fix For: 0.9.1.0
>
>
> We currently call `Configuration.setConfiguration(null)` from a couple of 
> places in our codebase (`Login` and `JaasUtils`) to force `Configuration` to 
> be reloaded. If this code is removed, some tests can fail depending on the 
> test execution order.
> Ideally we would not need to call `setConfiguration(null)` outside of tests. 
> Investigate if this is possible. If not, we should at least ensure that 
> reloads are done in a safe way within our codebase (perhaps using a lock).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)