[jira] [Commented] (KAFKA-2701) Consumer that uses ZooKeeper to connect to a Kafka broker receives messages from a server secured with SSL+Authentication

2015-10-28 Thread Mohammad Abbasi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14978477#comment-14978477
 ] 

Mohammad Abbasi commented on KAFKA-2701:


Hi [~ijuma], thanks for the reply.
No, I didn't, because 
https://cwiki.apache.org/confluence/display/KAFKA/Deploying+SSL+for+Kafka says: 
"Both PLAINTEXT and SSL ports are necessary if SSL is not enabled for 
inter-broker communication."
So does inter-broker communication matter for this problem?
I will test it by disabling PLAINTEXT and using SSL for inter-broker 
communication.
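
For anyone following along, the broker configuration being described would 
look roughly like this (a minimal sketch for 0.9; the hostname, port and 
keystore paths are placeholders, not values from this thread):

{code}
# No PLAINTEXT listener at all; clients and other brokers must use SSL.
listeners=SSL://broker1.example.com:9093
security.inter.broker.protocol=SSL

# Keystore/truststore locations and passwords are placeholders.
ssl.keystore.location=/path/to/kafka.server.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
ssl.truststore.location=/path/to/kafka.server.truststore.jks
ssl.truststore.password=changeit

# Require client certificates, since this issue is about SSL+Authentication.
ssl.client.auth=required
{code}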

> Consumer that uses ZooKeeper to connect to a Kafka broker receives messages 
> from a server secured with SSL+Authentication
> -----------------------------------------------------------------------------
>
> Key: KAFKA-2701
> URL: https://issues.apache.org/jira/browse/KAFKA-2701
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Mohammad Abbasi
>
> I have a Kafka server secured with SSL+Authentication. Consumers and 
> producers configured for SSL and authentication work fine with this server, 
> and consumers and producers without SSL configuration cannot send messages 
> to or receive messages from the secured Kafka server when they connect 
> "directly" (that is, not through ZooKeeper) to the server. 
> But when a non-authenticated consumer connects to the broker through 
> ZooKeeper, it receives messages from the secured Kafka server. Is this a 
> bug? Or, if this is expected, why can a non-authenticated consumer receive 
> messages from a Kafka server that requires authentication through SSL?





[jira] [Comment Edited] (KAFKA-2701) Consumer that uses ZooKeeper to connect to a Kafka broker receives messages from a server secured with SSL+Authentication

2015-10-28 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14978486#comment-14978486
 ] 

Ismael Juma edited comment on KAFKA-2701 at 10/28/15 2:20 PM:
--

That is correct: if you want a fully secure broker, you need to use SSL for 
inter-broker communication and disable the PLAINTEXT port. If the PLAINTEXT 
port is not disabled, it can be used by anyone (including non-authenticated 
consumers).


was (Author: ijuma):
That is correct, if you want a fully secure broker, you need to use SSL for 
inter-broker communication and disable the PLAINTEXT port. If the PLAINTEXT 
port is not disabled, it can be used by anyone.






[jira] [Commented] (KAFKA-2691) Improve handling of authorization failure during metadata refresh

2015-10-28 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14978605#comment-14978605
 ] 

Jason Gustafson commented on KAFKA-2691:


[~ijuma] To clarify slightly, the indefinite blocking in the consumer occurs 
when fetching consumer/group metadata, right? I went ahead and patched this in 
KAFKA-2683, so hopefully it is no longer an issue. However, we still have the 
problem that topic metadata authorization errors are only caught and logged 
by NetworkClient.
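
For readers of this thread, the caller-visible difference being discussed is 
roughly the following (a sketch only, using the 0.9 consumer classes; today 
the error is logged inside NetworkClient and the caller eventually times out, 
whereas the goal is for it to surface like this):

{code}
import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.TopicAuthorizationException;

public class AuthFailureSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1.example.com:9093"); // placeholder
        props.put("group.id", "test-group");
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Arrays.asList("secured-topic"));
            // Desired behavior: a non-transient authorization failure during
            // the metadata refresh triggered by poll() should surface here
            // rather than being swallowed and turning into a timeout.
            consumer.poll(1000);
        } catch (TopicAuthorizationException e) {
            System.err.println("Not authorized for topics: " + e.unauthorizedTopics());
        }
    }
}
{code}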

> Improve handling of authorization failure during metadata refresh
> -
>
> Key: KAFKA-2691
> URL: https://issues.apache.org/jira/browse/KAFKA-2691
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
> Fix For: 0.9.0.0
>
>
> There are two problems, one more severe than the other:
> 1. The consumer blocks indefinitely if there is a non-transient 
> authorization failure during metadata refresh, due to KAFKA-2391.
> 2. We get a TimeoutException instead of an AuthorizationException in the 
> producer for the same case.
> If the fix for KAFKA-2391 is to add a timeout, then we will have issue `2` 
> in both the producer and the consumer.





[jira] [Commented] (KAFKA-2701) Consumer that uses ZooKeeper to connect to a Kafka broker receives messages from a server secured with SSL+Authentication

2015-10-28 Thread Mohammad Abbasi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14978494#comment-14978494
 ] 

Mohammad Abbasi commented on KAFKA-2701:


Yes, that's correct: after disabling PLAINTEXT and using SSL for inter-broker 
communication, the problem is solved. Thank you for the guidance.






[GitHub] kafka pull request: KAFKA-2675; SASL/Kerberos follow up

2015-10-28 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/376

KAFKA-2675; SASL/Kerberos follow up



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka KAFKA-2675-sasl-kerberos-follow-up

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/376.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #376


commit 2e928f4705d731aa2754686651a65b0713d840b8
Author: Ismael Juma 
Date:   2015-10-23T08:30:46Z

Fix handling of `kafka_jaas.conf` not found in `SaslTestHarness`

commit 6aed268b3ff772bebb38c8f242ad63cca6ba83b6
Author: Ismael Juma 
Date:   2015-10-23T08:35:29Z

Remove unnecessary code in `BaseProducerSendTest`

In most cases, `producer` can never be null. In two
cases, there are multiple producers and the
`var producer` doesn't make sense.

commit f0cc13190c0374cf72f76040a52a66bace950ef7
Author: Ismael Juma 
Date:   2015-10-23T09:09:05Z

Move some tests from `BaseConsumerTest` to `PlaintextConsumerTest` in order 
to reduce build times

commit 8f1aa28fda00820b14f519fd7df457ef7804c634
Author: Ismael Juma 
Date:   2015-10-23T09:10:39Z

Make `Login` thread a daemon thread

This way, it won't prevent shutdown if `close` is not called on
`Consumer` or `Producer`.
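
(As a sketch of the idea rather than the actual patch — `refreshRunnable` 
below is a stand-in for the ticket-refresh logic — a daemon thread cannot keep 
the JVM alive, so a leaked client no longer blocks shutdown:)

{code}
// Illustrative only: mark the Kerberos ticket refresh thread as a daemon so
// that a Consumer/Producer that is never close()d cannot prevent JVM shutdown.
Thread refreshThread = new Thread(refreshRunnable, "kafka-kerberos-refresh");
refreshThread.setDaemon(true);  // must be set before start()
refreshThread.start();
{code}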

commit f9a3e4e1c918c92771403cabca089092c36c1638
Author: Ismael Juma 
Date:   2015-10-23T16:59:43Z

Rename `kafka.security.auth.to.local` to 
`sasl.kerberos.principal.to.local.rules`

Also improve wording for `SaslConfigs` docs.
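
(For reference, a value for the renamed config might look like the following; 
the realm and the rule itself are illustrative, not taken from this PR:)

{code}
# Map two-component principals such as kafka/host@EXAMPLE.COM to the short
# name "kafka", and fall back to the default mapping for everything else.
sasl.kerberos.principal.to.local.rules=RULE:[2:$1@$0](.*@EXAMPLE.COM)s/@.*//,DEFAULT
{code}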

commit ac58906f4c446941c43f193aaee45366dfd50950
Author: Ismael Juma 
Date:   2015-10-26T09:52:20Z

Remove unused `SASL_KAFKA_SERVER_REALM` property

commit c68554f4001979ca9283f007e20fe599c1eb85fa
Author: Ismael Juma 
Date:   2015-10-26T12:59:38Z

Remove forced reload of `Configuration` from `Login` and set JAAS property 
before starting `MiniKdc`

commit 503e2662a63bd39a1602ed73cba9b2c8fe4af55f
Author: Ismael Juma 
Date:   2015-10-27T21:22:59Z

Fix `IntegrationTestHarness` to set security configs correctly

commit 133076603671c50c4ab820f754c6ebaaedc58f15
Author: Ismael Juma 
Date:   2015-10-27T23:27:49Z

Improve logging in `ControllerChannelManager` by using `brokerNode` instead 
of `toBroker`

commit 7dd7eeff4748b28f31010196c8fbb2cb65d0da0e
Author: Ismael Juma 
Date:   2015-10-28T14:36:30Z

Introduce `LoginManager.closeAll()` and use it in `SaslTestHarness`

This is necessary to avoid authentication failures when consumers,
producers or brokers are leaked during tests.

commit 0f31db82a07b4be77cd2d95cf9d2f9eecd1343ee
Author: Ismael Juma 
Date:   2015-10-28T14:37:42Z

Improve exception handling in Sasl authenticators: avoid excessive 
exception chaining






[jira] [Commented] (KAFKA-2675) SASL/Kerberos follow-up

2015-10-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14978531#comment-14978531
 ] 

ASF GitHub Bot commented on KAFKA-2675:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/376

KAFKA-2675; SASL/Kerberos follow up



> SASL/Kerberos follow-up
> ---
>
> Key: KAFKA-2675
> URL: https://issues.apache.org/jira/browse/KAFKA-2675
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.9.0.0
>
>
> This is a follow-up to KAFKA-1686. 
> 1. Decide on `serviceName` configuration: do we want to keep it in two places?
> 2. auth.to.local config name is a bit opaque, is there a better one?
> 3. Implement or remove SASL_KAFKA_SERVER_REALM config
> 4. Consider making Login's thread a daemon thread
> 5. Write test that shows authentication failure due to principal in JAAS file 
> not being present in MiniKDC





[jira] [Commented] (KAFKA-2675) SASL/Kerberos follow-up

2015-10-28 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14978585#comment-14978585
 ] 

Ismael Juma commented on KAFKA-2675:


The PR fixes an issue with existing tests, but I didn't add any new ones. 
Because we rely on a system property to set the JAAS file with the principals, 
it seems tricky to set up the test so that authentication fails after the 
broker and clients are initialised (it's easy to make it fail during 
initialisation, as the `login` call fails if the principals are not set up 
correctly). For now, it seems like system tests will be easier for these 
cases.
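
For context, the setup being described is roughly the following (a sketch; 
the paths, keytab and principal are placeholders). The JAAS file is selected 
once per JVM via a system property, which is why varying the principals per 
test is awkward:

{code}
# Selected globally for the JVM:
#   -Djava.security.auth.login.config=/path/to/kafka_jaas.conf

# kafka_jaas.conf:
KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/path/to/kafka.keytab"
    principal="kafka/broker1.example.com@EXAMPLE.COM";
};
{code}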






[jira] [Resolved] (KAFKA-2701) Consumer that uses ZooKeeper to connect to a Kafka broker receives messages from a server secured with SSL+Authentication

2015-10-28 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-2701.

Resolution: Not A Problem

OK, thanks for confirming.






[jira] [Updated] (KAFKA-2675) SASL/Kerberos follow-up

2015-10-28 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2675:
---
Description: 
This is a follow-up to KAFKA-1686. 

1. Decide on `serviceName` configuration: do we want to keep it in two places?
2. auth.to.local config name is a bit opaque, is there a better one?
3. Implement or remove SASL_KAFKA_SERVER_REALM config
4. Consider making Login's thread a daemon thread

  was:
This is a follow-up to KAFKA-1686. 

1. Decide on `serviceName` configuration: do we want to keep it in two places?
2. auth.to.local config name is a bit opaque, is there a better one?
3. Implement or remove SASL_KAFKA_SERVER_REALM config
4. Consider making Login's thread a daemon thread
5. Write test that shows authentication failure due to principal in JAAS file 
not being present in MiniKDC







[jira] [Commented] (KAFKA-2701) Consumer that uses ZooKeeper to connect to a Kafka broker receives messages from a server secured with SSL+Authentication

2015-10-28 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14978464#comment-14978464
 ] 

Ismael Juma commented on KAFKA-2701:


Hi [~mabbasi90.class], did you disable the PLAINTEXT port in the listeners 
config?






[jira] [Commented] (KAFKA-2691) Improve handling of authorization failure during metadata refresh

2015-10-28 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14978578#comment-14978578
 ] 

Ismael Juma commented on KAFKA-2691:


According to Jason, we don't need KAFKA-2391. I will update this once Parth 
verifies it in the tests written as part of KAFKA-2598.








[jira] [Commented] (KAFKA-1641) Log cleaner exits if last cleaned offset is lower than earliest offset

2015-10-28 Thread Denis Zhdanov (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14977958#comment-14977958
 ] 

Denis Zhdanov commented on KAFKA-1641:
--

Ah, according to GitHub and the release plan, 0.9.0 is planned for November 
2015 and no releases are planned for 0.8.x. *sigh*


> Log cleaner exits if last cleaned offset is lower than earliest offset
> --
>
> Key: KAFKA-1641
> URL: https://issues.apache.org/jira/browse/KAFKA-1641
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.1.1
>Reporter: Joel Koshy
>Assignee: Guozhang Wang
> Fix For: 0.9.0.0
>
> Attachments: KAFKA-1641.patch, KAFKA-1641_2014-10-09_13:04:15.patch
>
>
> Encountered this recently: the log cleaner exited a while ago (I think 
> because the topic had compressed messages). That issue was subsequently 
> addressed by having the producer send only uncompressed messages. However, 
> on a subsequent restart of the broker we see this:
> {code}
> [kafka-server] [] [kafka-log-cleaner-thread-0], Error due to 
> java.lang.IllegalArgumentException: requirement failed: Last clean offset is 
> 54770438 but segment base offset is 382844024 for log testtopic-0.
> at scala.Predef$.require(Predef.scala:145)
> at kafka.log.Cleaner.buildOffsetMap(LogCleaner.scala:491)
> at kafka.log.Cleaner.clean(LogCleaner.scala:288)
> at kafka.log.LogCleaner$CleanerThread.cleanOrSleep(LogCleaner.scala:202)
> at kafka.log.LogCleaner$CleanerThread.doWork(LogCleaner.scala:187)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:51)
> {code}
> In this scenario I think it is reasonable to just emit a warning and have 
> the cleaner round its first dirty offset up to the base offset of the first 
> segment.





[GitHub] kafka pull request: KAFKA-2688; Avoid forcing reload of `Configura...

2015-10-28 Thread ijuma
Github user ijuma closed the pull request at:

https://github.com/apache/kafka/pull/359




[jira] [Commented] (KAFKA-1830) Closing socket connection to /10.118.192.104. (kafka.network.Processor)

2015-10-28 Thread Stephen Powis (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14978663#comment-14978663
 ] 

Stephen Powis commented on KAFKA-1830:
--

Running 0.8.2.1, and this is still an INFO log. Reviewing the release notes 
back through 0.8.1.0, I don't see KAFKA-1830 included anywhere.
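
In the meantime, one way to quiet this (a workaround sketch, assuming the 
broker's standard log4j.properties is in use) is to raise the level for that 
logger:

{code}
# Silence the per-connection INFO noise from the network processor.
log4j.logger.kafka.network.Processor=WARN
{code}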

> Closing socket connection to /10.118.192.104. (kafka.network.Processor)
> ---
>
> Key: KAFKA-1830
> URL: https://issues.apache.org/jira/browse/KAFKA-1830
> Project: Kafka
>  Issue Type: Test
>  Components: log
>Affects Versions: 0.8.1
> Environment: Linux OS, 5 node CDH5.12 cluster, Scala 2.10.4
>Reporter: Tapas Swain
>Priority: Critical
>
> I was testing Spark-Kafka integration. I created one producer which pushes 
> data to a Kafka topic. One consumer reads that data, processes it and 
> publishes the results to another Kafka topic. Suddenly the following log is 
> seen in the console:
> [2014-12-26 15:20:04,643] INFO Closing socket connection to /10.118.192.104. 
> (kafka.network.Processor)
> [2014-12-26 15:20:04,848] INFO Closing socket connection to /10.118.192.104. 
> (kafka.network.Processor)
> [2014-12-26 15:20:05,053] INFO Closing socket connection to /10.118.192.104. 
> (kafka.network.Processor)
> ... (the same INFO line repeats roughly every 200 ms)

[jira] [Commented] (KAFKA-2688) Avoid forcing reload of `Configuration`

2015-10-28 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14978649#comment-14978649
 ] 

Ismael Juma commented on KAFKA-2688:


I removed the forced reload for the SASL code in 
https://github.com/apache/kafka/pull/376
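
(For context, the forced reload being removed amounts to the following; this 
is standard javax.security.auth.login behavior, not Kafka-specific code:)

{code}
import javax.security.auth.login.Configuration;

// Passing null discards the cached Configuration; the next call to
// getConfiguration() re-parses the JAAS file named by the
// java.security.auth.login.config system property.
Configuration.setConfiguration(null);
Configuration refreshed = Configuration.getConfiguration();
{code}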

> Avoid forcing reload of `Configuration`
> ---
>
> Key: KAFKA-2688
> URL: https://issues.apache.org/jira/browse/KAFKA-2688
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Reporter: Ismael Juma
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> We currently call `Configuration.setConfiguration(null)` from a couple of 
> places in our codebase (`Login` and `JaasUtils`) to force `Configuration` to 
> be reloaded. If this code is removed, some tests can fail depending on the 
> test execution order.
> Ideally we would not need to call `setConfiguration(null)` outside of tests. 
> Investigate if this is possible. If not, we should at least ensure that 
> reloads are done in a safe way within our codebase (perhaps using a lock).





[jira] [Commented] (KAFKA-2688) Avoid forcing reload of `Configuration`

2015-10-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14978648#comment-14978648
 ] 

ASF GitHub Bot commented on KAFKA-2688:
---

Github user ijuma closed the pull request at:

https://github.com/apache/kafka/pull/359







[jira] [Commented] (KAFKA-2691) Improve handling of authorization failure during metadata refresh

2015-10-28 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14978624#comment-14978624
 ] 

Ismael Juma commented on KAFKA-2691:


[~hachikuji], I think that's right, but we'll know for sure once the branch for 
KAFKA-2598 is rebased to include your fix.






[GitHub] kafka pull request: KAFKA-2675; SASL/Kerberos follow up

2015-10-28 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/376




[jira] [Resolved] (KAFKA-2592) Stop Writing the Change-log in store.put() / delete() for Non-transactional Store

2015-10-28 Thread Yasuhiro Matsuda (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yasuhiro Matsuda resolved KAFKA-2592.
-
Resolution: Won't Fix

> Stop Writing the Change-log in store.put() / delete() for Non-transactional 
> Store
> -
>
> Key: KAFKA-2592
> URL: https://issues.apache.org/jira/browse/KAFKA-2592
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Yasuhiro Matsuda
> Fix For: 0.9.0.0
>
>
> Today we keep a dirty threshold and try to write to the change-log in 
> store.put() / delete() when the threshold has been exceeded. Doing this 
> largely increases the likelihood of inconsistent state upon unclean shutdown.





[jira] [Assigned] (KAFKA-2700) delete topic should remove the corresponding ACL and configs

2015-10-28 Thread Parth Brahmbhatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Brahmbhatt reassigned KAFKA-2700:
---

Assignee: Parth Brahmbhatt

> delete topic should remove the corresponding ACL and configs
> 
>
> Key: KAFKA-2700
> URL: https://issues.apache.org/jira/browse/KAFKA-2700
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Parth Brahmbhatt
>
> After a topic is successfully deleted, we should also remove any ACLs, 
> configs and perhaps committed offsets associated with the topic.





Build failed in Jenkins: kafka-trunk-jdk7 #728

2015-10-28 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: Clean-up MemoryRecords variables and APIs

--
[...truncated 425 lines...]
if (!Console.readLine().equalsIgnoreCase("y")) {
:389: class BrokerEndPoint in object UpdateMetadataRequest is deprecated: see 
corresponding Javadoc for more information.
  new UpdateMetadataRequest.BrokerEndPoint(brokerEndPoint.id, brokerEndPoint.host, brokerEndPoint.port)
:391: constructor UpdateMetadataRequest in class UpdateMetadataRequest is 
deprecated: see corresponding Javadoc for more information.
  new UpdateMetadataRequest(controllerId, controllerEpoch, liveBrokers.asJava, partitionStates.asJava)
:129: method readFromReadableChannel in class NetworkReceive is deprecated: 
see corresponding Javadoc for more information.
  response.readFromReadableChannel(channel)
there were 15 feature warnings; re-run with -feature for details
18 warnings found
:kafka-trunk-jdk7:core:processResources UP-TO-DATE
:kafka-trunk-jdk7:core:classes
:kafka-trunk-jdk7:log4j-appender:javadoc UP-TO-DATE
:kafka-trunk-jdk7:core:javadoc
:kafka-trunk-jdk7:core:javadocJar
:kafka-trunk-jdk7:core:scaladoc
[ant:scaladoc] Element ' does not exist.
[ant:scaladoc] :293: warning: a pure expression does nothing in statement 
position; you may be omitting necessary parentheses
[ant:scaladoc] ControllerStats.uncleanLeaderElectionRate
[ant:scaladoc] :294: warning: a pure expression does nothing in statement 
position; you may be omitting necessary parentheses
[ant:scaladoc] ControllerStats.leaderElectionTimer
[ant:scaladoc] warning: there were 15 feature warnings; re-run with -feature 
for details
[ant:scaladoc] :28: warning: Could not find any member to link for 
"NoReplicaOnlineException".
[ant:scaladoc] warning: Could not find any member to link for "Exception" 
(reported at :455, :490, :1160, :1250, :1276, :1293, :1334, :1415 and :1438).

[jira] [Commented] (KAFKA-2644) Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL

2015-10-28 Thread Rajini Sivaram (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14978916#comment-14978916
 ] 

Rajini Sivaram commented on KAFKA-2644:
---

[~ijuma] All the SASL tests are now passing in the Confluent build 
(http://jenkins.confluent.io/job/kafka_system_tests_branch_builder/135/console).
 But the tests now take an extra 2 hours to run. The current PR runs the 
console consumer sanity test, all replication tests and all benchmarks using 
both SASL_PLAINTEXT and SASL_SSL. I think there is value in running the sanity 
test and the replication tests with both SASL protocols. Would it make sense 
to restrict the number of benchmarks run with SASL? Maybe only run 
_test_producer_and_consumer_ and _test_end_to_end_latency_? Since all the 
tests are passing, it would be a trivial change to remove the tests that are 
not required.

> Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL
> 
>
> Key: KAFKA-2644
> URL: https://issues.apache.org/jira/browse/KAFKA-2644
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> We need to define which of the existing ducktape tests are relevant. cc 
> [~rsivaram]





[GitHub] kafka pull request: MINOR: Clean-up MemoryRecords variables and AP...

2015-10-28 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/348




[jira] [Resolved] (KAFKA-2675) SASL/Kerberos follow-up

2015-10-28 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-2675.

Resolution: Fixed

Issue resolved by pull request 376
[https://github.com/apache/kafka/pull/376]






[jira] [Commented] (KAFKA-2675) SASL/Kerberos follow-up

2015-10-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14978878#comment-14978878
 ] 

ASF GitHub Bot commented on KAFKA-2675:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/376







[GitHub] kafka pull request: MINOR: KAFKA-2371 follow-up, DistributedHerder...

2015-10-28 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/360




Re: [DISCUSS] KIP-40 - Extend GroupMetadata request to return group and member status

2015-10-28 Thread Gwen Shapira
Looks awesome to me :)

This will allow us both to list all groups and to retrieve offsets for
specific groups.

Since three days have passed with no comments, would you like to start a vote?

On Sun, Oct 25, 2015 at 6:29 PM, Jason Gustafson  wrote:
> Hi Kafka Devs,
>
> Currently, the new consumer provides no way to view a group's status except
> by inspecting coordinator and consumer logs. This includes listing the
> members of the group and their partition assignments. For the old consumer,
> tools could read this information directly from Zookeeper, but with
> persistence up in the air for the new consumer, that may not be possible.
> Even if it were, we might prefer to use a request API (in line with KIP-4)
> since that keeps tooling decoupled from the storage system and makes access
> control easier. Along those lines, I've created KIP-40 to solve this
> problem by extending the GroupMetadata request (formerly known as the
> ConsumerMetadata request). Have a look and let me know what you think!
>
> KIP-40:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-40%3A+GroupMetadata+request+enhancement
>
>
> Thanks,
> Jason
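
(For readers who have not opened the KIP yet, the extension is along these 
lines; the field names below are a sketch based on this description, and the 
KIP page has the authoritative schema:)

{code}
GroupMetadataRequest  => GroupId
  GroupId => string

GroupMetadataResponse => ErrorCode Coordinator GroupId State ProtocolType [Member]
  Member => MemberId MemberMetadata MemberAssignment
{code}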


[jira] [Commented] (KAFKA-2017) Persist Coordinator State for Coordinator Failover

2015-10-28 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14979146#comment-14979146
 ] 

Guozhang Wang commented on KAFKA-2017:
--

[~onurkaraman] [~becket_qin] [~jjkoshy] I have discussed with [~junrao] and 
[~hachikuji] about various options for 0.9.0:

1. We realized that relaxing the generation id check for offset commits while 
not persisting group state does not perfectly solve the problem. When the 
coordinator migrates, consumers will 1) discover the new coordinator, and 2) 
send the HB request as scheduled without stopping fetching. With a group of 
more than one consumer, it is likely that one consumer member will find the 
new coordinator and send the HB request, get the error back and rejoin the 
group. Since today the coordinator immediately creates the group when it 
receives a join group request from an unknown group for the first time and 
finishes the join-group phase immediately, right after that the other 
consumer members' commit requests will be rejected, hence still causing 
duplicates during coordinator migration.

We have talked about delaying the creation of the group up to the session 
timeout for the first-ever join group, or relaxing the offset commit check 
further to completely ignore the group id and always blindly accept the 
requests. But those solutions have their own problems: the former approach 
could delay the creation of the group by 30 seconds (the default session 
timeout), and the latter approach cannot distinguish consumers using Kafka 
for coordination from consumers that do the assignment themselves. So we 
think it is still necessary to have this feature in the 0.9.0 release.

2. We also went through the implementation details needed to enforce 
persistence in Kafka, and felt that there are still many tricky cases to get 
right, for example:

a) If we are going to use two topics, one for offsets and one for group 
metadata, then we need to make sure these two topics ALWAYS have the same 
leader (i.e. the coordinator) for their partitions. However, with the current 
reassignment mechanism, consecutive reassignments from bouncing brokers / 
broker failures cannot easily ensure that is the case. We could of course 
refactor the offset manager as a general key-value store with multiple 
topics, but that is a much larger feature, way beyond the scope of 0.9.0.

b) If we are going to use the same topic with the new message format Joel 
proposed, it is not clear how we can use log compaction to delete the 
old-format messages, as they will have different keys. If we keep messages of 
both versions, it will further increase the latency of loading the whole log 
for consumer group metadata upon coordinator migration, and we would also 
need to change the caching layer behavior to be able to override values 
while loading offsets from logs.

c) Instead, what we can do with the same topic is use the key version as a 
"type indicator": since both key and value have their own versions, we can 
use the key version number to indicate the type of the message: 0 for an 
offset message, 1 for a group metadata message. The value versions for offset 
and group metadata messages can still evolve separately; and we will never 
evolve key versions going forward (we cannot do this even today anyway 
because of log compaction), but would just change the topic if we ever had to.

With this proposal: 1) OffsetManager will become ConsumerGroupManager, though 
its related config names will still be "offsetXXX" for now since they are 
public; 2) loading a message from the log will return either an offset object 
or a group metadata object, both of which will be kept inside 
ConsumerGroupManager's cache; 3) we will store the assignment along with the 
metadata only after the sync phase is complete; for MM this assignment could 
be large, so we may want to reconfigure "offsetMaxRecordSize" to handle it; 
4) we will still need KIP-40 for querying the group metadata / assignment 
from ConsumerGroupManager's cache.

Thoughts?
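
(A sketch of the key-version dispatch described in (c); the class, method and 
helper names are illustrative, not the eventual implementation:)

{code}
import java.nio.ByteBuffer;

public class GroupTopicKeyParser {
    // Both key schemas begin with a 2-byte version; per (c), the version
    // doubles as the message-type indicator.
    static final short OFFSET_KEY_VERSION = 0;
    static final short GROUP_METADATA_KEY_VERSION = 1;

    static Object readKey(ByteBuffer key) {
        short version = key.getShort();
        switch (version) {
            case OFFSET_KEY_VERSION:
                return parseOffsetKey(key);          // hypothetical helper
            case GROUP_METADATA_KEY_VERSION:
                return parseGroupMetadataKey(key);   // hypothetical helper
            default:
                throw new IllegalStateException("Unexpected key version " + version);
        }
    }

    static Object parseOffsetKey(ByteBuffer key) { return null; }        // stub
    static Object parseGroupMetadataKey(ByteBuffer key) { return null; } // stub
}
{code}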

> Persist Coordinator State for Coordinator Failover
> --
>
> Key: KAFKA-2017
> URL: https://issues.apache.org/jira/browse/KAFKA-2017
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: Onur Karaman
>Assignee: Guozhang Wang
> Fix For: 0.9.0.0
>
> Attachments: KAFKA-2017.patch, KAFKA-2017_2015-05-20_09:13:39.patch, 
> KAFKA-2017_2015-05-21_19:02:47.patch
>
>
> When a coordinator fails, the group membership protocol tries to failover to 
> a new coordinator without forcing all the consumers rejoin their groups. This 
> is possible if the coordinator persists its state so that the state can be 
> 


[jira] [Commented] (KAFKA-2371) Add distributed coordinator implementation for Copycat

2015-10-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14979107#comment-14979107
 ] 

ASF GitHub Bot commented on KAFKA-2371:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/360


> Add distributed coordinator implementation for Copycat
> --
>
> Key: KAFKA-2371
> URL: https://issues.apache.org/jira/browse/KAFKA-2371
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> Copycat needs a Coordinator implementation that handles multiple Workers that 
> automatically manage the distribution of connectors and tasks across them. To 
> start, this implementation should handle any connectors that have been 
> registered via either a CLI or REST interface for starting/stopping 
> connectors.





[jira] [Commented] (KAFKA-2502) Quotas documentation for 0.8.3

2015-10-28 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14979144#comment-14979144
 ] 

Gwen Shapira commented on KAFKA-2502:
-

Any updates, [~aauradkar]? It's a blocker and we are looking to cut branches :)

> Quotas documentation for 0.8.3
> --
>
> Key: KAFKA-2502
> URL: https://issues.apache.org/jira/browse/KAFKA-2502
> Project: Kafka
>  Issue Type: Task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>Priority: Blocker
>  Labels: quotas
> Fix For: 0.9.0.0
>
>
> Complete quotas documentation
> Also, 
> https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol
>  needs to be updated with protocol changes introduced in KAFKA-2136





Build failed in Jenkins: kafka-trunk-jdk8 #66

2015-10-28 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: Clean-up MemoryRecords variables and APIs

--
[...truncated 4675 lines...]

org.apache.kafka.copycat.json.JsonConverterTest > nullSchemaAndMapToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > stringToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > timestampToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > 
testCopycatSchemaMetadataTranslation PASSED

org.apache.kafka.copycat.json.JsonConverterTest > timestampToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > decimalToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > mapToCopycatStringKeys PASSED

org.apache.kafka.copycat.json.JsonConverterTest > mapToJsonNonStringKeys PASSED

org.apache.kafka.copycat.json.JsonConverterTest > longToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > mismatchSchemaJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > 
testCacheSchemaToCopycatConversion PASSED

org.apache.kafka.copycat.json.JsonConverterTest > 
testJsonSchemaMetadataTranslation PASSED

org.apache.kafka.copycat.json.JsonConverterTest > bytesToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > shortToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > intToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > structToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > stringToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > nullSchemaAndArrayToJson 
PASSED

org.apache.kafka.copycat.json.JsonConverterTest > byteToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > nullSchemaPrimitiveToCopycat 
PASSED

org.apache.kafka.copycat.json.JsonConverterTest > byteToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > intToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > dateToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > noSchemaToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > noSchemaToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > nullSchemaAndPrimitiveToJson 
PASSED

org.apache.kafka.copycat.json.JsonConverterTest > mapToJsonStringKeys PASSED

org.apache.kafka.copycat.json.JsonConverterTest > arrayToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > nullToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > timeToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > structToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > shortToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > dateToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > doubleToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > timeToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > floatToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > decimalToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > arrayToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > booleanToJson PASSED

org.apache.kafka.copycat.json.JsonConverterTest > mapToCopycatNonStringKeys 
PASSED

org.apache.kafka.copycat.json.JsonConverterTest > bytesToCopycat PASSED

org.apache.kafka.copycat.json.JsonConverterTest > doubleToCopycat PASSED
:copycat:runtime:checkstyleMain
:copycat:runtime:compileTestJavawarning: [options] bootstrap class path not set 
in conjunction with -source 1.7
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
1 warning

:copycat:runtime:processTestResources
:copycat:runtime:testClasses
:copycat:runtime:checkstyleTest
:copycat:runtime:test

org.apache.kafka.copycat.storage.KafkaOffsetBackingStoreTest > testSetFailure 
PASSED

org.apache.kafka.copycat.storage.KafkaOffsetBackingStoreTest > testMissingTopic 
PASSED

org.apache.kafka.copycat.storage.KafkaOffsetBackingStoreTest > testStartStop 
PASSED

org.apache.kafka.copycat.storage.KafkaOffsetBackingStoreTest > 
testReloadOnStart PASSED

org.apache.kafka.copycat.storage.KafkaOffsetBackingStoreTest > testGetSet PASSED

org.apache.kafka.copycat.storage.OffsetStorageWriterTest > testWriteFlush PASSED

org.apache.kafka.copycat.storage.OffsetStorageWriterTest > 
testWriteNullValueFlush PASSED

org.apache.kafka.copycat.storage.OffsetStorageWriterTest > 
testWriteNullKeyFlush PASSED

org.apache.kafka.copycat.storage.OffsetStorageWriterTest > testNoOffsetsToFlush 
PASSED

org.apache.kafka.copycat.storage.OffsetStorageWriterTest > 
testFlushFailureReplacesOffsets PASSED

org.apache.kafka.copycat.storage.OffsetStorageWriterTest > testAlreadyFlushing 
PASSED

org.apache.kafka.copycat.storage.OffsetStorageWriterTest > 
testCancelBeforeAwaitFlush PASSED

org.apache.kafka.copycat.storage.OffsetStorageWriterTest > 
testCancelAfterAwaitFlush PASSED


Build failed in Jenkins: kafka-trunk-jdk8 #67

2015-10-28 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-2675; SASL/Kerberos follow up

--
[...truncated 363 lines...]
:kafka-trunk-jdk8:log4j-appender:compileJava UP-TO-DATE
:kafka-trunk-jdk8:log4j-appender:processResources UP-TO-DATE
:kafka-trunk-jdk8:log4j-appender:classes UP-TO-DATE
:kafka-trunk-jdk8:log4j-appender:jar UP-TO-DATE
:kafka-trunk-jdk8:core:compileJava UP-TO-DATE
:kafka-trunk-jdk8:core:compileScala UP-TO-DATE
:kafka-trunk-jdk8:core:processResources UP-TO-DATE
:kafka-trunk-jdk8:core:classes UP-TO-DATE
:kafka-trunk-jdk8:log4j-appender:javadoc
:kafka-trunk-jdk8:core:javadoc
:kafka-trunk-jdk8:core:javadocJar
:kafka-trunk-jdk8:core:scaladoc
[ant:scaladoc] Element 
' 
does not exist.
[ant:scaladoc] 
:293:
 warning: a pure expression does nothing in statement position; you may be 
omitting necessary parentheses
[ant:scaladoc] ControllerStats.uncleanLeaderElectionRate
[ant:scaladoc] ^
[ant:scaladoc] 
:294:
 warning: a pure expression does nothing in statement position; you may be 
omitting necessary parentheses
[ant:scaladoc] ControllerStats.leaderElectionTimer
[ant:scaladoc] ^
[ant:scaladoc] warning: there were 15 feature warning(s); re-run with -feature 
for details
[ant:scaladoc] 
:72:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#offer".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:32:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#offer".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:137:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#poll".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:120:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#poll".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:97:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#put".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:152:
 warning: Could not find any member to link for 
"java.util.concurrent.BlockingQueue#take".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 9 warnings found
:kafka-trunk-jdk8:core:scaladocJar
:kafka-trunk-jdk8:core:docsJar
:docsJar_2_11_7
Building project 'core' with Scala version 2.11.7
:kafka-trunk-jdk8:clients:compileJava UP-TO-DATE
:kafka-trunk-jdk8:clients:processResources UP-TO-DATE
:kafka-trunk-jdk8:clients:classes UP-TO-DATE
:kafka-trunk-jdk8:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk8:clients:createVersionFile
:kafka-trunk-jdk8:clients:jar UP-TO-DATE
:kafka-trunk-jdk8:clients:javadoc UP-TO-DATE
:kafka-trunk-jdk8:log4j-appender:compileJava UP-TO-DATE
:kafka-trunk-jdk8:log4j-appender:processResources UP-TO-DATE
:kafka-trunk-jdk8:log4j-appender:classes UP-TO-DATE
:kafka-trunk-jdk8:log4j-appender:jar UP-TO-DATE
:kafka-trunk-jdk8:core:compileJava UP-TO-DATE
:kafka-trunk-jdk8:core:compileScalaJava HotSpot(TM) 64-Bit Server VM warning: 
ignoring option MaxPermSize=512m; support was removed in 8.0

:78:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^

Re: [DISCUSS] KIP-40 - Extend GroupMetadata request to return group and member status

2015-10-28 Thread Jason Gustafson
Hey Jun,

Thanks for taking a look at this. Initially the thought was to make
GroupMetadataRequest analogous to TopicMetadataRequest: one accepts an
array of topics and returns topic metadata, the other accepts an array of
groupIds and returns group metadata. However, the analogy doesn't quite fit
since the current usage of GroupMetadataRequest in the consumer is really
just to lookup the coordinator (perhaps a better name would be
FindCoordinatorRequest?). The other issue is that we are trying to support
simple group listing and detailed group inspection in the same request.
Since group metadata can be relatively large, it would be nice to avoid
having to send it when listing groups is all you want to do. So if we agree
that we should not overload GroupMetadataRequest, then that leaves two
options:

1. We can add a new DescribeGroup request which accepts an array of
groupIds and includes an option to include/exclude member metadata (e.g. a
"verbose" flag).
2. We can do your suggestion, which is to add a new ListGroups request for
simple group listing, and DescribeGroup which only accepts a single groupId
and always returns all member metadata.

If there are no objections to adding the additional API requests, I
probably favor your suggestion since it seems the simplest and least
error-prone. However, I think the first option would also be reasonable if
we wanted to keep the API surface small. In that case, it might make sense
to rename GroupMetadataRequest to FindCoordinatorRequest, which would allow
DescribeGroupRequest to be called GroupMetadataRequest. Then the analogy
with TopicMetadataRequest would actually fit.

Thanks,
Jason



On Wed, Oct 28, 2015 at 6:32 PM, Jun Rao  wrote:

> Jason,
>
> Thanks for the writeup. Perhaps we can have two new requests:
> DescribeConsumerGroup and ListConsumerGroup. The former takes a list of
> groups and returns a list of metadata (members, group metadata, member
> metadata, etc) for each group. The latter takes nothing and just returns a
> list of groups hosted on the broker. Using an empty list to represent "all"
> can potentially generate a large response if there are many groups.
>
> Since this is marked as an 0.9.0 blocker, it would be great if other people
> can review this KIP soon.
>
> Thanks,
>
> Jun
>
>
> On Wed, Oct 28, 2015 at 3:37 PM, Ismael Juma  wrote:
>
> > On Wed, Oct 28, 2015 at 10:25 PM, Jason Gustafson 
> > wrote:
> >
> > > Hey Ashish,
> > >
> > > Yeah, that's fine with me too. I thought people kind of frowned upon
> the
> > > use of an empty topic list to get all topics, but perhaps it's more of
> an
> > > issue at the user API level.
> > >
> >
> > Yes, empty list to represent "all" is quite error-prone. In fact, we have
> > one such bug in the authorization code in trunk right now (there is a PR
> > open with a fix though).
> >
> > Ismael
> >
>


[jira] [Updated] (KAFKA-2702) ConfigDef toHtmlTable() sorts in a way that is a bit confusing

2015-10-28 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-2702:
---
Attachment: ConsumerConfig-Before.html
ConsumerConfig-After.html

Added sample ConsumerConfig output tables before and after patch for comparison.

> ConfigDef toHtmlTable() sorts in a way that is a bit confusing
> --
>
> Key: KAFKA-2702
> URL: https://issues.apache.org/jira/browse/KAFKA-2702
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Grant Henke
> Attachments: ConsumerConfig-After.html, ConsumerConfig-Before.html
>
>
> Because we put everything without default first (without prioritizing), 
> critical  parameters get placed below low priority ones when they both have 
> no defaults. Some parameters are without default and optional (SASL server in 
> ConsumerConfig for instance).
> Try printing ConsumerConfig parameters and see the mandatory group.id show up 
> as #15.
> I suggest sorting the no-default parameters by priority as well, or perhaps 
> adding a "REQUIRED" category that gets printed first no matter what.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2702) ConfigDef toHtmlTable() sorts in a way that is a bit confusing

2015-10-28 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14979782#comment-14979782
 ] 

Jay Kreps commented on KAFKA-2702:
--

Originally the presence or absence of a default indicated whether something was 
required--i.e. *all* fields are required to have a value but you can provide a 
default. Looks like later someone wanted to make a separate variable for 
whether it was required (not sure why) and they decided that if there was no 
default they would just fill in null as the default. But this change doesn't 
seem to have been fully carried out.

The original approach was actually done that way on purpose. In the new 
approach setting a config to non-required field seems to be a duplicative way 
of saying a config with a default value of null, which you could already do. 
But null is actually not always a very good default value to set so having 
people explicitly give the default for non-required fields is probably good.

For example what happens if I have a non-required int config with no default 
and I do something like
  int myValue = config.getInt(MYCONFIGNAME);
I think I will get a null pointer.

I think this happened in KAFKA-1845 so it might be good to figure out what the 
intention was in the change, probably there was some issue this fixed...

> ConfigDef toHtmlTable() sorts in a way that is a bit confusing
> --
>
> Key: KAFKA-2702
> URL: https://issues.apache.org/jira/browse/KAFKA-2702
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Grant Henke
> Attachments: ConsumerConfig-After.html, ConsumerConfig-Before.html
>
>
> Because we put everything without default first (without prioritizing), 
> critical  parameters get placed below low priority ones when they both have 
> no defaults. Some parameters are without default and optional (SASL server in 
> ConsumerConfig for instance).
> Try printing ConsumerConfig parameters and see the mandatory group.id show up 
> as #15.
> I suggest sorting the no-default parameters by priority as well, or perhaps 
> adding a "REQUIRED" category that gets printed first no matter what.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #733

2015-10-28 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: Fix missing copyright in config file added in KAFKA-2640.

--
[...truncated 427 lines...]
 ^
:389:
 class BrokerEndPoint in object UpdateMetadataRequest is deprecated: see 
corresponding Javadoc for more information.
  new UpdateMetadataRequest.BrokerEndPoint(brokerEndPoint.id, 
brokerEndPoint.host, brokerEndPoint.port)
^
:391:
 constructor UpdateMetadataRequest in class UpdateMetadataRequest is 
deprecated: see corresponding Javadoc for more information.
new UpdateMetadataRequest(controllerId, controllerEpoch, 
liveBrokers.asJava, partitionStates.asJava)
^
:129:
 method readFromReadableChannel in class NetworkReceive is deprecated: see 
corresponding Javadoc for more information.
  response.readFromReadableChannel(channel)
   ^
there were 15 feature warnings; re-run with -feature for details
18 warnings found
:kafka-trunk-jdk7:core:processResources UP-TO-DATE
:kafka-trunk-jdk7:core:classes
:kafka-trunk-jdk7:log4j-appender:javadoc UP-TO-DATE
:kafka-trunk-jdk7:core:javadoc
:kafka-trunk-jdk7:core:javadocJar
:kafka-trunk-jdk7:core:scaladoc
[ant:scaladoc] Element 
' 
does not exist.
[ant:scaladoc] 
:293:
 warning: a pure expression does nothing in statement position; you may be 
omitting necessary parentheses
[ant:scaladoc] ControllerStats.uncleanLeaderElectionRate
[ant:scaladoc] ^
[ant:scaladoc] 
:294:
 warning: a pure expression does nothing in statement position; you may be 
omitting necessary parentheses
[ant:scaladoc] ControllerStats.leaderElectionTimer
[ant:scaladoc] ^
[ant:scaladoc] warning: there were 15 feature warnings; re-run with -feature 
for details
[ant:scaladoc] 
:28:
 warning: Could not find any member to link for "NoReplicaOnlineException".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:1160:
 warning: Could not find any member to link for "Exception".
[ant:scaladoc] /**
[ant:scaladoc] ^
[ant:scaladoc] 
:1334:
 warning: Could not find any member to link for "Exception".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:1293:
 warning: Could not find any member to link for "Exception".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:490:
 warning: Could not find any member to link for "Exception".
[ant:scaladoc] /**
[ant:scaladoc] ^
[ant:scaladoc] 
:455:
 warning: Could not find any member to link for "Exception".
[ant:scaladoc] /**
[ant:scaladoc] ^
[ant:scaladoc] 
:1276:
 warning: Could not find any member to link for "Exception".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:1250:
 warning: Could not find any member to link for "Exception".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:1438:
 warning: Could not find any member to link for "Exception".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:1415:
 warning: Could not find any member to link for "Exception".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 

Re: [DISCUSS] KIP-40 - Extend GroupMetadata request to return group and member status

2015-10-28 Thread Jun Rao
Jason,

Thanks for the writeup. Perhaps we can have two new requests:
DescribeConsumerGroup and ListConsumerGroup. The former takes a list of
groups and returns a list of metadata (members, group metadata, member
metadata, etc) for each group. The latter takes nothing and just returns a
list of groups hosted on the broker. Using an empty list to represent "all"
can potentially generate a large response if there are many groups.

Since this is marked as an 0.9.0 blocker, it would be great if other people
can review this KIP soon.

Thanks,

Jun


On Wed, Oct 28, 2015 at 3:37 PM, Ismael Juma  wrote:

> On Wed, Oct 28, 2015 at 10:25 PM, Jason Gustafson 
> wrote:
>
> > Hey Ashish,
> >
> > Yeah, that's fine with me too. I thought people kind of frowned upon the
> > use of an empty topic list to get all topics, but perhaps it's more of an
> > issue at the user API level.
> >
>
> Yes, empty list to represent "all" is quite error-prone. In fact, we have
> one such bug in the authorization code in trunk right now (there is a PR
> open with a fix though).
>
> Ismael
>


[jira] [Commented] (KAFKA-2640) Add tests for ZK authentication

2015-10-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14979687#comment-14979687
 ] 

ASF GitHub Bot commented on KAFKA-2640:
---

GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/377

MINOR: Fix missing copyright in config file added in KAFKA-2640.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka minor-jaas-config-copyright

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/377.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #377


commit 7d5c7c79cfbfa037508ebff60551a7c11b7568c4
Author: Ewen Cheslack-Postava 
Date:   2015-10-29T02:35:03Z

MINOR: Fix missing copyright in config file added in KAFKA-2640.




> Add tests for ZK authentication
> ---
>
> Key: KAFKA-2640
> URL: https://issues.apache.org/jira/browse/KAFKA-2640
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Flavio Junqueira
>Assignee: Flavio Junqueira
> Fix For: 0.9.0.0
>
>
> Add tests for KAKA-2639.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2369: Add REST API for Copycat.

2015-10-28 Thread ewencp
GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/378

KAFKA-2369: Add REST API for Copycat.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka kafka-2369-copycat-rest-api

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/378.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #378


commit 15cea5a9e69a78743886bf602cac180cb4aa05f0
Author: Ewen Cheslack-Postava 
Date:   2015-10-26T16:39:12Z

KAFKA-2369: Add REST API for Copycat.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: MINOR: Fix missing copyright in config file ad...

2015-10-28 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/377


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2640) Add tests for ZK authentication

2015-10-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14979746#comment-14979746
 ] 

ASF GitHub Bot commented on KAFKA-2640:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/377


> Add tests for ZK authentication
> ---
>
> Key: KAFKA-2640
> URL: https://issues.apache.org/jira/browse/KAFKA-2640
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Flavio Junqueira
>Assignee: Flavio Junqueira
> Fix For: 0.9.0.0
>
>
> Add tests for KAKA-2639.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2702) ConfigDef toHtmlTable() sorts in a way that is a bit confusing

2015-10-28 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14979786#comment-14979786
 ] 

Jay Kreps commented on KAFKA-2702:
--

Also note that the check against NO_DEFAULT_VALUE isn't a check against the 
empty string it is a check against being the object named NO_DEFAULT_VALUE 
(i.e. reference equality vs object equality). That object happens to be an 
empty string so it prints out as nothing but that doesn't make it equal to 
other empty strings for the purpose of determining if there is a default or not.

> ConfigDef toHtmlTable() sorts in a way that is a bit confusing
> --
>
> Key: KAFKA-2702
> URL: https://issues.apache.org/jira/browse/KAFKA-2702
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Grant Henke
> Attachments: ConsumerConfig-After.html, ConsumerConfig-Before.html
>
>
> Because we put everything without default first (without prioritizing), 
> critical  parameters get placed below low priority ones when they both have 
> no defaults. Some parameters are without default and optional (SASL server in 
> ConsumerConfig for instance).
> Try printing ConsumerConfig parameters and see the mandatory group.id show up 
> as #15.
> I suggest sorting the no-default parameters by priority as well, or perhaps 
> adding a "REQUIRED" category that gets printed first no matter what.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #71

2015-10-28 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: Fix missing copyright in config file added in KAFKA-2640.

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (Ubuntu ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 47c888078d257c45c3685c514d0d4556ac46f947 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 47c888078d257c45c3685c514d0d4556ac46f947
 > git rev-list bd3fe839ce0e4a7276d19e8e873783aa0cd76707 # timeout=10
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson8005414696377586360.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.1/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 12.902 secs
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson5906553245097547595.sh
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll docsJarAll 
testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:clean UP-TO-DATE
:clients:clean
:contrib:clean UP-TO-DATE
:copycat:clean UP-TO-DATE
:core:clean
:examples:clean
:log4j-appender:clean
:streams:clean
:tools:clean
:contrib:hadoop-consumer:clean
:contrib:hadoop-producer:clean
:copycat:api:clean
:copycat:file:clean
:copycat:json:clean
:copycat:runtime:clean
:jar_core_2_10_5
Building project 'core' with Scala version 2.10.5
:kafka-trunk-jdk8:clients:compileJavawarning: [options] bootstrap class path 
not set in conjunction with -source 1.7
Note: 

 uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
1 warning

:kafka-trunk-jdk8:clients:processResources UP-TO-DATE
:kafka-trunk-jdk8:clients:classes
:kafka-trunk-jdk8:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk8:clients:createVersionFile
:kafka-trunk-jdk8:clients:jar
:kafka-trunk-jdk8:log4j-appender:compileJavawarning: [options] bootstrap class 
path not set in conjunction with -source 1.7
1 warning

:kafka-trunk-jdk8:log4j-appender:processResources UP-TO-DATE
:kafka-trunk-jdk8:log4j-appender:classes
:kafka-trunk-jdk8:log4j-appender:jar
:kafka-trunk-jdk8:core:compileJava UP-TO-DATE
:kafka-trunk-jdk8:core:compileScalaJava HotSpot(TM) 64-Bit Server VM warning: 
ignoring option MaxPermSize=512m; support was removed in 8.0

:78:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^
:37:
 value DEFAULT_TIMESTAMP in object 

[jira] [Assigned] (KAFKA-2702) ConfigDef toHtmlTable() sorts in a way that is a bit confusing

2015-10-28 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke reassigned KAFKA-2702:
--

Assignee: Grant Henke

> ConfigDef toHtmlTable() sorts in a way that is a bit confusing
> --
>
> Key: KAFKA-2702
> URL: https://issues.apache.org/jira/browse/KAFKA-2702
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Grant Henke
>
> Because we put everything without default first (without prioritizing), 
> critical  parameters get placed below low priority ones when they both have 
> no defaults. Some parameters are without default and optional (SASL server in 
> ConsumerConfig for instance).
> Try printing ConsumerConfig parameters and see the mandatory group.id show up 
> as #15.
> I suggest sorting the no-default parameters by priority as well, or perhaps 
> adding a "REQUIRED" category that gets printed first no matter what.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2369) Add Copycat REST API

2015-10-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14979688#comment-14979688
 ] 

ASF GitHub Bot commented on KAFKA-2369:
---

GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/378

KAFKA-2369: Add REST API for Copycat.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka kafka-2369-copycat-rest-api

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/378.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #378


commit 15cea5a9e69a78743886bf602cac180cb4aa05f0
Author: Ewen Cheslack-Postava 
Date:   2015-10-26T16:39:12Z

KAFKA-2369: Add REST API for Copycat.




> Add Copycat REST API
> 
>
> Key: KAFKA-2369
> URL: https://issues.apache.org/jira/browse/KAFKA-2369
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> Add a REST API for Copycat. At a minimum, for a single worker this should 
> support:
> * add/remove connector
> * connector status
> * task status
> * worker status
> In distributed mode this should handle forwarding if necessary, but it may 
> make sense to defer the distributed support for a later JIRA.
> This will require the addition of new dependencies to support implementing 
> the REST API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: MINOR: Fix missing copyright in config file ad...

2015-10-28 Thread ewencp
GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/377

MINOR: Fix missing copyright in config file added in KAFKA-2640.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka minor-jaas-config-copyright

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/377.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #377


commit 7d5c7c79cfbfa037508ebff60551a7c11b7568c4
Author: Ewen Cheslack-Postava 
Date:   2015-10-29T02:35:03Z

MINOR: Fix missing copyright in config file added in KAFKA-2640.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2702) ConfigDef toHtmlTable() sorts in a way that is a bit confusing

2015-10-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14979754#comment-14979754
 ] 

ASF GitHub Bot commented on KAFKA-2702:
---

GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/379

KAFKA-2702: ConfigDef toHtmlTable() sorts in a way that is a bit conf…

…using

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka config-html

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/379.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #379


commit 257655b3b5a6f0a374332c6d403bbcc117844117
Author: Grant Henke 
Date:   2015-10-29T03:49:24Z

KAFKA-2702: ConfigDef toHtmlTable() sorts in a way that is a bit confusing




> ConfigDef toHtmlTable() sorts in a way that is a bit confusing
> --
>
> Key: KAFKA-2702
> URL: https://issues.apache.org/jira/browse/KAFKA-2702
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Grant Henke
>
> Because we put everything without default first (without prioritizing), 
> critical  parameters get placed below low priority ones when they both have 
> no defaults. Some parameters are without default and optional (SASL server in 
> ConsumerConfig for instance).
> Try printing ConsumerConfig parameters and see the mandatory group.id show up 
> as #15.
> I suggest sorting the no-default parameters by priority as well, or perhaps 
> adding a "REQUIRED" category that gets printed first no matter what.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2702) ConfigDef toHtmlTable() sorts in a way that is a bit confusing

2015-10-28 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-2702:
---
Status: Patch Available  (was: Open)

> ConfigDef toHtmlTable() sorts in a way that is a bit confusing
> --
>
> Key: KAFKA-2702
> URL: https://issues.apache.org/jira/browse/KAFKA-2702
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Grant Henke
>
> Because we put everything without default first (without prioritizing), 
> critical  parameters get placed below low priority ones when they both have 
> no defaults. Some parameters are without default and optional (SASL server in 
> ConsumerConfig for instance).
> Try printing ConsumerConfig parameters and see the mandatory group.id show up 
> as #15.
> I suggest sorting the no-default parameters by priority as well, or perhaps 
> adding a "REQUIRED" category that gets printed first no matter what.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2702: ConfigDef toHtmlTable() sorts in a...

2015-10-28 Thread granthenke
GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/379

KAFKA-2702: ConfigDef toHtmlTable() sorts in a way that is a bit conf…

…using

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka config-html

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/379.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #379


commit 257655b3b5a6f0a374332c6d403bbcc117844117
Author: Grant Henke 
Date:   2015-10-29T03:49:24Z

KAFKA-2702: ConfigDef toHtmlTable() sorts in a way that is a bit confusing




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2702) ConfigDef toHtmlTable() sorts in a way that is a bit confusing

2015-10-28 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14979753#comment-14979753
 ] 

Grant Henke commented on KAFKA-2702:


Looking into this a bit more...

{quote}
Try printing ConsumerConfig parameters and see the mandatory group.id show up 
as #15.
{quote}
There is an issue in ConfigDef where _NO_DEFAULT_VALUE = new String("")_, 
however an empty string is actually a valid default value. Later on in the html 
output, there is also an issue where null is interpreted as NO_DEFAULT_VALUE. 
Null could also be a valid default. 

This may also be an html output issue. If there is a default, its just empty 
string (""), maybe we should print that. Many of the string parameters have a 
default of "".

{quote}
or perhaps adding a "REQUIRED" category that gets printed first no matter 
{quote}
There is a "required" field in ConfigKey. Adding that as a column to the table 
is a good idea.

{quote}
Aren't things without default required? 
{quote}
There are many optional parameters that don't have a default, but are not 
required. Especially with the addition of many of the SSL parameters.

I think what we are looking for it prioritizing parameters that are required 
and have no default. I will submit a patch, fixing the issues mentioned above 
and adjusting the sort with that change, and we can discuss if its actually an 
improvement over what exists.








> ConfigDef toHtmlTable() sorts in a way that is a bit confusing
> --
>
> Key: KAFKA-2702
> URL: https://issues.apache.org/jira/browse/KAFKA-2702
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>
> Because we put everything without default first (without prioritizing), 
> critical  parameters get placed below low priority ones when they both have 
> no defaults. Some parameters are without default and optional (SASL server in 
> ConsumerConfig for instance).
> Try printing ConsumerConfig parameters and see the mandatory group.id show up 
> as #15.
> I suggest sorting the no-default parameters by priority as well, or perhaps 
> adding a "REQUIRED" category that gets printed first no matter what.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2674) ConsumerRebalanceListener.onPartitionsRevoked() is not called on consumer close

2015-10-28 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14979819#comment-14979819
 ] 

Jason Gustafson commented on KAFKA-2674:


[~guozhang] In the context of rebalancing, I think the meaning of 
onPrepare/onComplete should be fairly clear to the user, and the benefit is 
that these names suggest the semantics that we actually implement. Maybe 
oldAssignment is a bad argument name, but we could use currentAssignment as 
Becket suggests, or we could also use the revoked/assigned names, as below:
{code}
interface RebalanceListener {
  void onPrepare(List revokedPartitions);
  void onComplete(List assignedPartitions);
}
{code}
Haha, I've been working on this code a little too much, so it's hard for me to 
see whether this would be more intuitive to users.

> ConsumerRebalanceListener.onPartitionsRevoked() is not called on consumer 
> close
> ---
>
> Key: KAFKA-2674
> URL: https://issues.apache.org/jira/browse/KAFKA-2674
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: Michal Turek
>Assignee: Jason Gustafson
>
> Hi, I'm investigating and testing behavior of new consumer from the planned 
> release 0.9 and found an inconsistency in calling of rebalance callbacks.
> I noticed that ConsumerRebalanceListener.onPartitionsRevoked() is NOT called 
> during consumer close and application shutdown. It's JavaDoc contract says:
> - "This method will be called before a rebalance operation starts and after 
> the consumer stops fetching data."
> - "It is recommended that offsets should be committed in this callback to 
> either Kafka or a custom offset store to prevent duplicate data."
> I believe calling consumer.close() is a start of rebalance operation and even 
> the local consumer that is actually closing should be notified to be able to 
> process any rebalance logic including offsets commit (e.g. if auto-commit is 
> disabled).
> There are commented logs of current and expected behaviors.
> {noformat}
> // Application start
> 2015-10-20 15:14:02.208 INFO  o.a.k.common.utils.AppInfoParser
> [TestConsumer-worker-0]: Kafka version : 0.9.0.0-SNAPSHOT 
> (AppInfoParser.java:82)
> 2015-10-20 15:14:02.208 INFO  o.a.k.common.utils.AppInfoParser
> [TestConsumer-worker-0]: Kafka commitId : 241b9ab58dcbde0c 
> (AppInfoParser.java:83)
> // Consumer started (the first one in group), rebalance callbacks are called 
> including empty onPartitionsRevoked()
> 2015-10-20 15:14:02.333 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Rebalance callback, revoked: [] 
> (TestConsumer.java:95)
> 2015-10-20 15:14:02.343 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Rebalance callback, assigned: [testB-1, testA-0, 
> testB-0, testB-3, testA-2, testB-2, testA-1, testA-4, testB-4, testA-3] 
> (TestConsumer.java:100)
> // Another consumer joined the group, rebalancing
> 2015-10-20 15:14:17.345 INFO  o.a.k.c.c.internals.Coordinator 
> [TestConsumer-worker-0]: Attempt to heart beat failed since the group is 
> rebalancing, try to re-join group. (Coordinator.java:714)
> 2015-10-20 15:14:17.346 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Rebalance callback, revoked: [testB-1, testA-0, 
> testB-0, testB-3, testA-2, testB-2, testA-1, testA-4, testB-4, testA-3] 
> (TestConsumer.java:95)
> 2015-10-20 15:14:17.349 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Rebalance callback, assigned: [testB-3, testA-4, 
> testB-4, testA-3] (TestConsumer.java:100)
> // Consumer started closing, there SHOULD be onPartitionsRevoked() callback 
> to commit offsets like during standard rebalance, but it is missing
> 2015-10-20 15:14:39.280 INFO  c.a.e.kafka.newapi.TestConsumer [main]: 
> Closing instance (TestConsumer.java:42)
> 2015-10-20 15:14:40.264 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Worker thread stopped (TestConsumer.java:89)
> {noformat}
> Workaround is to call onPartitionsRevoked() explicitly and manually just 
> before calling consumer.close() but it seems dirty and error prone for me. It 
> can be simply forgotten be someone without such experience.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2702) ConfigDef toHtmlTable() sorts in a way that is a bit confusing

2015-10-28 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14979802#comment-14979802
 ] 

Grant Henke commented on KAFKA-2702:


[~jkreps] Good point. I was in Scala mode. I reverted the NO_DEFAULT_VALUE 
change. 

> ConfigDef toHtmlTable() sorts in a way that is a bit confusing
> --
>
> Key: KAFKA-2702
> URL: https://issues.apache.org/jira/browse/KAFKA-2702
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Grant Henke
> Attachments: ConsumerConfig-After.html, ConsumerConfig-Before.html
>
>
> Because we put everything without default first (without prioritizing), 
> critical  parameters get placed below low priority ones when they both have 
> no defaults. Some parameters are without default and optional (SASL server in 
> ConsumerConfig for instance).
> Try printing ConsumerConfig parameters and see the mandatory group.id show up 
> as #15.
> I suggest sorting the no-default parameters by priority as well, or perhaps 
> adding a "REQUIRED" category that gets printed first no matter what.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2640: Add tests for ZK authentication

2015-10-28 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/324


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: KAFKA-2648: group.id is required for new consu...

2015-10-28 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/362


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2502) Quotas documentation for 0.8.3

2015-10-28 Thread Aditya Auradkar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14979155#comment-14979155
 ] 

Aditya Auradkar commented on KAFKA-2502:


[~gwenshap][~ijuma] - Sorry for the delay.. been dealing with some internal 
stuff. I'll submit something by tomorrow.

> Quotas documentation for 0.8.3
> --
>
> Key: KAFKA-2502
> URL: https://issues.apache.org/jira/browse/KAFKA-2502
> Project: Kafka
>  Issue Type: Task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>Priority: Blocker
>  Labels: quotas
> Fix For: 0.9.0.0
>
>
> Complete quotas documentation
> Also, 
> https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol
>  needs to be updated with protocol changes introduced in KAFKA-2136



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2640) Add tests for ZK authentication

2015-10-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14979163#comment-14979163
 ] 

ASF GitHub Bot commented on KAFKA-2640:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/324


> Add tests for ZK authentication
> ---
>
> Key: KAFKA-2640
> URL: https://issues.apache.org/jira/browse/KAFKA-2640
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Flavio Junqueira
>Assignee: Flavio Junqueira
> Fix For: 0.9.0.0
>
>
> Add tests for KAKA-2639.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-40 - Extend GroupMetadata request to return group and member status

2015-10-28 Thread Neha Narkhede
I slightly prefer one of the rejected alternatives over the currently
suggested one - which is to add a separate DescribeGroupRequest that always
returns the member metadata for the group and any other information useful
for monitoring and tooling. It helps keep the abstractions clean and also
reduces the number of optional fields in the existing requests.

Also, TopicMetadataRequest returns information for all topics if the
requested topic is null. Is there a reason to handle this differently for
consumer group metadata?




On Wed, Oct 28, 2015 at 12:59 PM, Gwen Shapira  wrote:

> Looks awesome to me :)
>
> This will allow to both list all groups and to retrieve offsets for
> specific groups.
>
> Since 3 days passed with no comments, would you like to start a vote?
>
> On Sun, Oct 25, 2015 at 6:29 PM, Jason Gustafson 
> wrote:
> > Hi Kafka Devs,
> >
> > Currently, the new consumer provides no way to view a group's status
> except
> > by inspecting coordinator and consumer logs. This includes listing the
> > members of the group and their partition assignments. For the old
> consumer,
> > tools could read this information directly from Zookeeper, but with
> > persistence up in the air for the new consumer, that may not be possible.
> > Even if it were, we might prefer to use a request API (in line with
> KIP-4)
> > since that keeps tooling decoupled from the storage system and makes
> access
> > control easier. Along those lines, I've created KIP-40 to solve this
> > problem by extending the GroupMetadata request (formerly known as the
> > ConsumerMetadata request). Have a look and let me know what you think!
> >
> > KIP-40:
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-40%3A+GroupMetadata+request+enhancement
> >
> >
> > Thanks,
> > Jason
>



-- 
Thanks,
Neha


[jira] [Resolved] (KAFKA-2640) Add tests for ZK authentication

2015-10-28 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-2640.

   Resolution: Fixed
Fix Version/s: 0.9.0.0

Issue resolved by pull request 324
[https://github.com/apache/kafka/pull/324]

> Add tests for ZK authentication
> ---
>
> Key: KAFKA-2640
> URL: https://issues.apache.org/jira/browse/KAFKA-2640
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Flavio Junqueira
>Assignee: Flavio Junqueira
> Fix For: 0.9.0.0
>
>
> Add tests for KAKA-2639.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2687) Allow GroupMetadataRequest to return member metadata when received by group coordinator

2015-10-28 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2687:

Fix Version/s: 0.9.0.0

> Allow GroupMetadataRequest to return member metadata when received by group 
> coordinator
> ---
>
> Key: KAFKA-2687
> URL: https://issues.apache.org/jira/browse/KAFKA-2687
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Since the new consumer currently has no persistence in Zookeeper (pending 
> outcome of KAFKA-2017), there is no way for administrators to investigate 
> group status including getting the list of members in the group and their 
> partition assignments. We therefore propose to modify GroupMetadataRequest 
> (previously known as ConsumerMetadataRequest) to return group metadata when 
> received by the respective group's coordinator. When received by another 
> broker, the request will be handled as before: by only returning coordinator 
> host and port information.
> {code}
> GroupMetadataRequest => GroupId IncludeMetadata
>   GroupId => String
>   IncludeMetadata => Boolean
> GroupMetadataResponse => ErrorCode Coordinator GroupMetadata
>   ErrorCode => int16
>   Coordinator => Id Host Port
> Id => int32
> Host => string
> Port => int32
>   GroupMetadata => State ProtocolType Generation Protocol Leader  Members
> State => String
> ProtocolType => String
> Generation => int32
> Protocol => String
> Leader => String
> Members => [Member MemberMetadata MemberAssignment]
>   Member => MemberIp ClientId
> MemberIp => String
> ClientId => String
>   MemberMetadata => Bytes
>   MemberAssignment => Bytes
> {code}
> The request schema includes a flag to indicate whether metadata is needed, 
> which saves clients from having to read all group metadata when they are just 
> trying to find the coordinator. This is important to reduce group overhead 
> for use cases which involve a large number of topic subscriptions (e.g. 
> mirror maker).
> Tools will use the protocol type to determine how to parse metadata. For 
> example, when the protocolType is "consumer", the tool can use 
> ConsumerProtocol to parse the member metadata as topic subscriptions and 
> partition assignments. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2687) Allow GroupMetadataRequest to return member metadata when received by group coordinator

2015-10-28 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2687:

Priority: Blocker  (was: Major)

> Allow GroupMetadataRequest to return member metadata when received by group 
> coordinator
> ---
>
> Key: KAFKA-2687
> URL: https://issues.apache.org/jira/browse/KAFKA-2687
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Since the new consumer currently has no persistence in Zookeeper (pending 
> outcome of KAFKA-2017), there is no way for administrators to investigate 
> group status including getting the list of members in the group and their 
> partition assignments. We therefore propose to modify GroupMetadataRequest 
> (previously known as ConsumerMetadataRequest) to return group metadata when 
> received by the respective group's coordinator. When received by another 
> broker, the request will be handled as before: by only returning coordinator 
> host and port information.
> {code}
> GroupMetadataRequest => GroupId IncludeMetadata
>   GroupId => String
>   IncludeMetadata => Boolean
> GroupMetadataResponse => ErrorCode Coordinator GroupMetadata
>   ErrorCode => int16
>   Coordinator => Id Host Port
> Id => int32
> Host => string
> Port => int32
>   GroupMetadata => State ProtocolType Generation Protocol Leader  Members
> State => String
> ProtocolType => String
> Generation => int32
> Protocol => String
> Leader => String
> Members => [Member MemberMetadata MemberAssignment]
>   Member => MemberIp ClientId
> MemberIp => String
> ClientId => String
>   MemberMetadata => Bytes
>   MemberAssignment => Bytes
> {code}
> The request schema includes a flag to indicate whether metadata is needed, 
> which saves clients from having to read all group metadata when they are just 
> trying to find the coordinator. This is important to reduce group overhead 
> for use cases which involve a large number of topic subscriptions (e.g. 
> mirror maker).
> Tools will use the protocol type to determine how to parse metadata. For 
> example, when the protocolType is "consumer", the tool can use 
> ConsumerProtocol to parse the member metadata as topic subscriptions and 
> partition assignments. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2648) Coordinator should not allow empty groupIds

2015-10-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14979228#comment-14979228
 ] 

ASF GitHub Bot commented on KAFKA-2648:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/362


> Coordinator should not allow empty groupIds
> ---
>
> Key: KAFKA-2648
> URL: https://issues.apache.org/jira/browse/KAFKA-2648
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.9.0.0
>
>
> The coordinator currently allows consumer groups with empty groupIds, but 
> there probably aren't any cases where this is actually a good idea and it 
> tends to mask problems where different groups have simply not configured a 
> groupId. To address this, we can add a new error code, say INVALID_GROUP_ID, 
> which the coordinator can return when it encounters an  empty groupID. We 
> should also make groupId a required property in consumer configuration and 
> enforce that it is non-empty. 
> It's a little unclear whether this change would have compatibility concerns. 
> The old consumer will fail with an empty groupId (because it cannot create 
> the zookeeper paths), but other clients may allow it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-2648) Coordinator should not allow empty groupIds

2015-10-28 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-2648.
--
   Resolution: Fixed
Fix Version/s: 0.9.0.0

Issue resolved by pull request 362
[https://github.com/apache/kafka/pull/362]

> Coordinator should not allow empty groupIds
> ---
>
> Key: KAFKA-2648
> URL: https://issues.apache.org/jira/browse/KAFKA-2648
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.9.0.0
>
>
> The coordinator currently allows consumer groups with empty groupIds, but 
> there probably aren't any cases where this is actually a good idea and it 
> tends to mask problems where different groups have simply not configured a 
> groupId. To address this, we can add a new error code, say INVALID_GROUP_ID, 
> which the coordinator can return when it encounters an empty groupId. We 
> should also make groupId a required property in consumer configuration and 
> enforce that it is non-empty. 
> It's a little unclear whether this change would have compatibility concerns. 
> The old consumer will fail with an empty groupId (because it cannot create 
> the zookeeper paths), but other clients may allow it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Jenkins build is back to normal : kafka-trunk-jdk7 #729

2015-10-28 Thread Apache Jenkins Server
See 



Re: [DISCUSS] KIP-40 - Extend GroupMetadata request to return group and member status

2015-10-28 Thread Ashish Singh
Jason, thanks for the great write-up. I am overall in favor of the suggested
changes. However, I too think that there is no specific need for the
*IncludeAllGroups* flag, but that could be due to me not being aware of why
this pattern is frowned upon for TopicMetadataRequest. To me it simply eases
use.

On Wed, Oct 28, 2015 at 1:20 PM, Neha Narkhede  wrote:

> I slightly prefer one of the rejected alternatives over the currently
> suggested one - which is to add a separate DescribeGroupRequest that always
> returns the member metadata for the group and any other information useful
> for monitoring and tooling. It helps keep the abstractions clean and also
> reduces the number of optional fields in the existing requests.
>
> Also, TopicMetadataRequest returns information for all topics if the
> requested topic is null. Is there a reason to handle this differently for
> consumer group metadata?
>
>
>
>
> On Wed, Oct 28, 2015 at 12:59 PM, Gwen Shapira  wrote:
>
> > Looks awesome to me :)
> >
> > This will allow to both list all groups and to retrieve offsets for
> > specific groups.
> >
> > Since 3 days passed with no comments, would you like to start a vote?
> >
> > On Sun, Oct 25, 2015 at 6:29 PM, Jason Gustafson 
> > wrote:
> > > Hi Kafka Devs,
> > >
> > > Currently, the new consumer provides no way to view a group's status
> > except
> > > by inspecting coordinator and consumer logs. This includes listing the
> > > members of the group and their partition assignments. For the old
> > consumer,
> > > tools could read this information directly from Zookeeper, but with
> > > persistence up in the air for the new consumer, that may not be
> possible.
> > > Even if it were, we might prefer to use a request API (in line with
> > KIP-4)
> > > since that keeps tooling decoupled from the storage system and makes
> > access
> > > control easier. Along those lines, I've created KIP-40 to solve this
> > > problem by extending the GroupMetadata request (formerly known as the
> > > ConsumerMetadata request). Have a look and let me know what you think!
> > >
> > > KIP-40:
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-40%3A+GroupMetadata+request+enhancement
> > >
> > >
> > > Thanks,
> > > Jason
> >
>
>
>
> --
> Thanks,
> Neha
>



-- 

Regards,
Ashish


Build failed in Jenkins: kafka-trunk-jdk8 #69

2015-10-28 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2648: enforce non-empty group-ids in join-group request

--
[...truncated 1051 lines...]

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.RollingBounceTest > testRollingBounce PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithCollision PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokerListWithNoTopics PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testGetAllTopicMetadata 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testTopicMetadataRequest 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic PASSED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest PASSED

kafka.integration.SslTopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
PASSED

kafka.integration.SslTopicMetadataTest > testAutoCreateTopicWithCollision PASSED

kafka.integration.SslTopicMetadataTest > testAliveBrokerListWithNoTopics PASSED

kafka.integration.SslTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SslTopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.SslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SslTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SslTopicMetadataTest > testTopicMetadataRequest PASSED

kafka.integration.SslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.FetcherTest > testFetcher PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionInvalidTopicOverride PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionEnabledByTopicOverride PASSED

kafka.zk.ZKEphemeralTest > testOverlappingSessions[0] PASSED

kafka.zk.ZKEphemeralTest > testEphemeralNodeCleanup[0] FAILED
org.I0Itec.zkclient.exception.ZkTimeoutException: Unable to connect to 
zookeeper server within timeout: 6000
at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1183)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:147)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:122)
at kafka.utils.ZkUtils$.createZkClientAndConnection(ZkUtils.scala:89)
at kafka.utils.ZkUtils$.apply(ZkUtils.scala:71)
at 
kafka.zk.ZKEphemeralTest.testEphemeralNodeCleanup(ZKEphemeralTest.scala:79)

kafka.zk.ZKEphemeralTest > testZkWatchedEphemeral[0] PASSED

kafka.zk.ZKEphemeralTest > testSameSession[0] PASSED

kafka.zk.ZKEphemeralTest > testOverlappingSessions[1] PASSED

kafka.zk.ZKEphemeralTest > testEphemeralNodeCleanup[1] PASSED

kafka.zk.ZKEphemeralTest > testZkWatchedEphemeral[1] PASSED

kafka.zk.ZKEphemeralTest > testSameSession[1] PASSED

kafka.zk.ZKPathTest > testCreatePersistentSequentialThrowsException PASSED

kafka.zk.ZKPathTest > testCreatePersistentSequentialExists PASSED

kafka.zk.ZKPathTest > testCreateEphemeralPathExists PASSED

kafka.zk.ZKPathTest > testCreatePersistentPath PASSED

kafka.zk.ZKPathTest > testMakeSurePersistsPathExistsThrowsException PASSED

kafka.zk.ZKPathTest > testCreateEphemeralPathThrowsException PASSED

kafka.zk.ZKPathTest > testCreatePersistentPathThrowsException PASSED

kafka.zk.ZKPathTest > testMakeSurePersistsPathExists PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.message.MessageCompressionTest > testSimpleCompressDecompress PASSED

kafka.message.MessageWriterTest > testWithNoCompressionAttribute PASSED

kafka.message.MessageWriterTest > testWithCompressionAttribute PASSED

kafka.message.MessageWriterTest > testBufferingOutputStream PASSED


Re: [DISCUSS] KIP-40 - Extend GroupMetadata request to return group and member status

2015-10-28 Thread Jason Gustafson
Hey Neha,

Thanks for the feedback. I don't have a strong position on either of the
points you mentioned. If we're fine having an additional request type, then
maybe we could do something like this:

1. GroupMetadata accepts an array of groupIds and just returns the
coordinator for each group. An empty array can be used to get all groups
managed by the broker.
2. DescribeGroup takes a single groupId and returns all metadata including
the group state and the member states.
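
To make this concrete, a rough sketch of the wire formats (hypothetical, using
the BNF notation from the KIP; an empty GroupId array would mean "all groups
managed by this broker"):

GroupMetadataRequest => [GroupId]
  GroupId => String

DescribeGroupRequest => GroupId
  GroupId => String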

What do you think?

Thanks,
Jason


On Wed, Oct 28, 2015 at 1:20 PM, Neha Narkhede  wrote:

> I slightly prefer one of the rejected alternatives over the currently
> suggested one - which is to add a separate DescribeGroupRequest that always
> returns the member metadata for the group and any other information useful
> for monitoring and tooling. It helps keep the abstractions clean and also
> reduces the number of optional fields in the existing requests.
>
> Also, TopicMetadataRequest returns information for all topics if the
> requested topic is null. Is there a reason to handle this differently for
> consumer group metadata?
>
>
>
>
> On Wed, Oct 28, 2015 at 12:59 PM, Gwen Shapira  wrote:
>
> > Looks awesome to me :)
> >
> > This will allow to both list all groups and to retrieve offsets for
> > specific groups.
> >
> > Since 3 days passed with no comments, would you like to start a vote?
> >
> > On Sun, Oct 25, 2015 at 6:29 PM, Jason Gustafson 
> > wrote:
> > > Hi Kafka Devs,
> > >
> > > Currently, the new consumer provides no way to view a group's status
> > except
> > > by inspecting coordinator and consumer logs. This includes listing the
> > > members of the group and their partition assignments. For the old
> > consumer,
> > > tools could read this information directly from Zookeeper, but with
> > > persistence up in the air for the new consumer, that may not be
> possible.
> > > Even if it were, we might prefer to use a request API (in line with
> > KIP-4)
> > > since that keeps tooling decoupled from the storage system and makes
> > access
> > > control easier. Along those lines, I've created KIP-40 to solve this
> > > problem by extending the GroupMetadata request (formerly known as the
> > > ConsumerMetadata request). Have a look and let me know what you think!
> > >
> > > KIP-40:
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-40%3A+GroupMetadata+request+enhancement
> > >
> > >
> > > Thanks,
> > > Jason
> >
>
>
>
> --
> Thanks,
> Neha
>


[jira] [Updated] (KAFKA-2502) Quotas documentation for 0.8.3

2015-10-28 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2502:
---
Reviewer: Gwen Shapira  (was: Ismael Juma)

> Quotas documentation for 0.8.3
> --
>
> Key: KAFKA-2502
> URL: https://issues.apache.org/jira/browse/KAFKA-2502
> Project: Kafka
>  Issue Type: Task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>Priority: Blocker
>  Labels: quotas
> Fix For: 0.9.0.0
>
>
> Complete quotas documentation
> Also, 
> https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol
>  needs to be updated with protocol changes introduced in KAFKA-2136



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2688) Avoid forcing reload of `Configuration`

2015-10-28 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14979385#comment-14979385
 ] 

Ismael Juma commented on KAFKA-2688:


Flavio, assigning this to you as I have removed the SASL code and we now only 
need to remove it from `isZkSecurityEnabled`. I hope that's OK.

> Avoid forcing reload of `Configuration`
> ---
>
> Key: KAFKA-2688
> URL: https://issues.apache.org/jira/browse/KAFKA-2688
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Reporter: Ismael Juma
>Assignee: Flavio Junqueira
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> We currently call `Configuration.setConfiguration(null)` from a couple of 
> places in our codebase (`Login` and `JaasUtils`) to force `Configuration` to 
> be reloaded. If this code is removed, some tests can fail depending on the 
> test execution order.
> Ideally we would not need to call `setConfiguration(null)` outside of tests. 
> Investigate if this is possible. If not, we should at least ensure that 
> reloads are done in a safe way within our codebase (perhaps using a lock).
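> One possible shape for a guarded reload (a sketch of the locking idea only, 
> not actual Kafka code):
> {code}
> import javax.security.auth.login.Configuration;
> 
> public final class JaasReload {
>     private static final Object LOCK = new Object();
> 
>     // Serialize forced reloads of the JAAS Configuration behind a lock.
>     public static void reload() {
>         synchronized (LOCK) {
>             Configuration.setConfiguration(null);
>             Configuration.getConfiguration(); // triggers the re-read
>         }
>     }
> }
> {code}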



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2644) Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL

2015-10-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14979392#comment-14979392
 ] 

ASF GitHub Bot commented on KAFKA-2644:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/361


> Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL
> 
>
> Key: KAFKA-2644
> URL: https://issues.apache.org/jira/browse/KAFKA-2644
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> We need to define which of the existing ducktape tests are relevant. cc 
> [~rsivaram]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: MINOR: Add new build target for system test li...

2015-10-28 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/361


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-2017) Persist Coordinator State for Coordinator Failover

2015-10-28 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2017:
---
Priority: Blocker  (was: Major)

> Persist Coordinator State for Coordinator Failover
> --
>
> Key: KAFKA-2017
> URL: https://issues.apache.org/jira/browse/KAFKA-2017
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: Onur Karaman
>Assignee: Guozhang Wang
>Priority: Blocker
> Fix For: 0.9.0.0
>
> Attachments: KAFKA-2017.patch, KAFKA-2017_2015-05-20_09:13:39.patch, 
> KAFKA-2017_2015-05-21_19:02:47.patch
>
>
> When a coordinator fails, the group membership protocol tries to failover to 
> a new coordinator without forcing all the consumers rejoin their groups. This 
> is possible if the coordinator persists its state so that the state can be 
> transferred during coordinator failover. This state consists of most of the 
> information in GroupRegistry and ConsumerRegistry.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2545) SSLConsumerTest.testSeek fails with JDK8u60

2015-10-28 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2545:
---
Fix Version/s: (was: 0.9.0.0)

> SSLConsumerTest.testSeek fails with JDK8u60
> ---
>
> Key: KAFKA-2545
> URL: https://issues.apache.org/jira/browse/KAFKA-2545
> Project: Kafka
>  Issue Type: Bug
>  Components: security
> Environment: OS X 10.11
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Minor
>
> This fails consistently for me with JDK8u60, but passes with JDK7u80. I don't 
> know if this is a real problem with the implementation or just an issue with 
> the test, but we need to investigate before the release.
> Stacktrace follows:
> {code}
> java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.TimeoutException: Failed to update metadata 
> after 3000 ms.
>   at 
> org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.<init>(KafkaProducer.java:639)
>   at 
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:406)
>   at 
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:297)
>   at kafka.api.SSLConsumerTest$$anonfun$1.apply(SSLConsumerTest.scala:212)
>   at kafka.api.SSLConsumerTest$$anonfun$1.apply(SSLConsumerTest.scala:211)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>   at scala.collection.immutable.Range.foreach(Range.scala:141)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:105)
>   at kafka.api.SSLConsumerTest.sendRecords(SSLConsumerTest.scala:211)
>   at kafka.api.SSLConsumerTest.testSeek(SSLConsumerTest.scala:144)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
>   at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:117)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:234)
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:74)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)
> Caused by: org.apache.kafka.common.errors.TimeoutException: Failed to update 
> metadata after 3000 ms.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2502) Quotas documentation for 0.8.3

2015-10-28 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14979389#comment-14979389
 ] 

Ismael Juma commented on KAFKA-2502:


[~gwenshap], I assigned you as the reviewer as I will be away for a couple of 
weeks. I hope that's OK.

> Quotas documentation for 0.8.3
> --
>
> Key: KAFKA-2502
> URL: https://issues.apache.org/jira/browse/KAFKA-2502
> Project: Kafka
>  Issue Type: Task
>Reporter: Aditya Auradkar
>Assignee: Aditya Auradkar
>Priority: Blocker
>  Labels: quotas
> Fix For: 0.9.0.0
>
>
> Complete quotas documentation
> Also, 
> https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol
>  needs to be updated with protocol changes introduced in KAFKA-2136



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-40 - Extend GroupMetadata request to return group and member status

2015-10-28 Thread Ismael Juma
On Wed, Oct 28, 2015 at 10:25 PM, Jason Gustafson 
wrote:

> Hey Ashish,
>
> Yeah, that's fine with me too. I thought people kind of frowned upon the
> use of an empty topic list to get all topics, but perhaps it's more of an
> issue at the user API level.
>

Yes, using an empty list to represent "all" is quite error-prone. In fact, we have
one such bug in the authorization code in trunk right now (there is a PR
open with a fix though).

Ismael


[jira] [Commented] (KAFKA-2674) ConsumerRebalanceListener.onPartitionsRevoked() is not called on consumer close

2015-10-28 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14979433#comment-14979433
 ] 

Jay Kreps commented on KAFKA-2674:
--

[~hachikuji] I don't have a ton to add. I think I added that class, but it was 
mostly a placeholder rather than something with a well-thought-out rationale; I 
agree that the way it calls revoke prior to assign is a bit odd.

> ConsumerRebalanceListener.onPartitionsRevoked() is not called on consumer 
> close
> ---
>
> Key: KAFKA-2674
> URL: https://issues.apache.org/jira/browse/KAFKA-2674
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: Michal Turek
>Assignee: Jason Gustafson
>
> Hi, I'm investigating and testing behavior of new consumer from the planned 
> release 0.9 and found an inconsistency in calling of rebalance callbacks.
> I noticed that ConsumerRebalanceListener.onPartitionsRevoked() is NOT called 
> during consumer close and application shutdown. Its JavaDoc contract says:
> - "This method will be called before a rebalance operation starts and after 
> the consumer stops fetching data."
> - "It is recommended that offsets should be committed in this callback to 
> either Kafka or a custom offset store to prevent duplicate data."
> I believe calling consumer.close() is the start of a rebalance operation, and 
> even the local consumer that is actually closing should be notified so it can 
> run any rebalance logic, including committing offsets (e.g. if auto-commit is 
> disabled).
> There are commented logs of current and expected behaviors.
> {noformat}
> // Application start
> 2015-10-20 15:14:02.208 INFO  o.a.k.common.utils.AppInfoParser
> [TestConsumer-worker-0]: Kafka version : 0.9.0.0-SNAPSHOT 
> (AppInfoParser.java:82)
> 2015-10-20 15:14:02.208 INFO  o.a.k.common.utils.AppInfoParser
> [TestConsumer-worker-0]: Kafka commitId : 241b9ab58dcbde0c 
> (AppInfoParser.java:83)
> // Consumer started (the first one in group), rebalance callbacks are called 
> including empty onPartitionsRevoked()
> 2015-10-20 15:14:02.333 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Rebalance callback, revoked: [] 
> (TestConsumer.java:95)
> 2015-10-20 15:14:02.343 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Rebalance callback, assigned: [testB-1, testA-0, 
> testB-0, testB-3, testA-2, testB-2, testA-1, testA-4, testB-4, testA-3] 
> (TestConsumer.java:100)
> // Another consumer joined the group, rebalancing
> 2015-10-20 15:14:17.345 INFO  o.a.k.c.c.internals.Coordinator 
> [TestConsumer-worker-0]: Attempt to heart beat failed since the group is 
> rebalancing, try to re-join group. (Coordinator.java:714)
> 2015-10-20 15:14:17.346 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Rebalance callback, revoked: [testB-1, testA-0, 
> testB-0, testB-3, testA-2, testB-2, testA-1, testA-4, testB-4, testA-3] 
> (TestConsumer.java:95)
> 2015-10-20 15:14:17.349 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Rebalance callback, assigned: [testB-3, testA-4, 
> testB-4, testA-3] (TestConsumer.java:100)
> // Consumer started closing, there SHOULD be onPartitionsRevoked() callback 
> to commit offsets like during standard rebalance, but it is missing
> 2015-10-20 15:14:39.280 INFO  c.a.e.kafka.newapi.TestConsumer [main]: 
> Closing instance (TestConsumer.java:42)
> 2015-10-20 15:14:40.264 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Worker thread stopped (TestConsumer.java:89)
> {noformat}
> The workaround is to call onPartitionsRevoked() explicitly just before 
> calling consumer.close(), but that seems dirty and error-prone to me. It can 
> simply be forgotten by someone without such experience.
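> For illustration, a minimal sketch of that workaround (hypothetical 
> application code, assuming references to the consumer and listener are kept):
> {code}
> import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
> import org.apache.kafka.clients.consumer.KafkaConsumer;
> 
> // Fire the revoke callback manually so offsets can be committed, then close.
> static void shutdown(KafkaConsumer<byte[], byte[]> consumer,
>                      ConsumerRebalanceListener listener) {
>     listener.onPartitionsRevoked(consumer.assignment());
>     consumer.close();
> }
> {code}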



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2702) ConfigDef toHtmlTable() sorts in a way that is a bit confusing

2015-10-28 Thread Gwen Shapira (JIRA)
Gwen Shapira created KAFKA-2702:
---

 Summary: ConfigDef toHtmlTable() sorts in a way that is a bit 
confusing
 Key: KAFKA-2702
 URL: https://issues.apache.org/jira/browse/KAFKA-2702
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira


Because we put everything without a default first (without prioritizing), 
critical parameters get placed below low-priority ones when both have no 
defaults. Some parameters have no default and are optional (the SASL settings 
in ConsumerConfig, for instance).

Try printing ConsumerConfig parameters and see the mandatory group.id show up 
as #15.

I suggest sorting the no-default parameters by priority as well, or perhaps 
adding a "REQUIRED" category that gets printed first no matter what.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #68

2015-10-28 Thread Apache Jenkins Server
See 

Changes:

[cshapi] MINOR: KAFKA-2371 follow-up, DistributedHerder should wakeup

[junrao] KAFKA-2640; Add tests for ZK authentication

--
[...truncated 4858 lines...]

kafka.coordinator.MemberMetadataTest > testVoteRaisesOnNoSupportedProtocols 
PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.FileMessageSetTest > testTruncate PASSED

kafka.log.FileMessageSetTest > testIterationOverPartialAndTruncation PASSED

kafka.log.FileMessageSetTest > testRead PASSED

kafka.log.FileMessageSetTest > testFileSize PASSED

kafka.log.FileMessageSetTest > testIteratorWithLimits PASSED

kafka.log.FileMessageSetTest > testPreallocateTrue PASSED

kafka.log.FileMessageSetTest > testIteratorIsConsistent PASSED

kafka.log.FileMessageSetTest > testIterationDoesntChangePosition PASSED

kafka.log.FileMessageSetTest > testWrittenEqualsRead PASSED

kafka.log.FileMessageSetTest > testWriteTo PASSED

kafka.log.FileMessageSetTest > testPreallocateFalse PASSED

kafka.log.FileMessageSetTest > testPreallocateClearShutdown PASSED

kafka.log.FileMessageSetTest > testSearch PASSED

kafka.log.FileMessageSetTest > testSizeInBytes PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingTopic PASSED

kafka.log.LogTest > testIndexRebuild PASSED

kafka.log.LogTest > testLogRolls PASSED

kafka.log.LogTest > testMessageSizeCheck PASSED

kafka.log.LogTest > testAsyncDelete PASSED

kafka.log.LogTest > testReadOutOfRange PASSED

kafka.log.LogTest > testReadAtLogGap PASSED

kafka.log.LogTest > testTimeBasedLogRoll PASSED

kafka.log.LogTest > testLoadEmptyLog PASSED

kafka.log.LogTest > testMessageSetSizeCheck PASSED

kafka.log.LogTest > testIndexResizingAtTruncation PASSED

kafka.log.LogTest > testCompactedTopicConstraints PASSED


Re: [DISCUSS] KIP-40 - Extend GroupMetadata request to return group and member status

2015-10-28 Thread Jason Gustafson
Hey Ashish,

Yeah, that's fine with me too. I thought people kind of frowned upon the
use of an empty topic list to get all topics, but perhaps it's more of an
issue at the user API level.

-Jason

On Wed, Oct 28, 2015 at 2:58 PM, Ashish Singh  wrote:

> Jason, thanks for the great write up. I am overall in favor of changes
> suggested. However, I too think that there is no specific need of
> *IncludeAllGroups* flag, but that could be due to me not being aware of why
> this pattern is frowned upon for TopicMetadataRequest. To me it simply
> eases the use.
>
> On Wed, Oct 28, 2015 at 1:20 PM, Neha Narkhede  wrote:
>
> > I slightly prefer one of the rejected alternatives over the currently
> > suggested one - which is to add a separate DescribeGroupRequest that
> always
> > returns the member metadata for the group and any other information
> useful
> > for monitoring and tooling. It helps keep the abstractions clean and also
> > reduces the number of optional fields in the existing requests.
> >
> > Also, TopicMetadataRequest returns information for all topics if the
> > requested topic is null. Is there a reason to handle this differently for
> > consumer group metadata?
> >
> >
> >
> >
> > On Wed, Oct 28, 2015 at 12:59 PM, Gwen Shapira 
> wrote:
> >
> > > Looks awesome to me :)
> > >
> > > This will allow to both list all groups and to retrieve offsets for
> > > specific groups.
> > >
> > > Since 3 days passed with no comments, would you like to start a vote?
> > >
> > > On Sun, Oct 25, 2015 at 6:29 PM, Jason Gustafson 
> > > wrote:
> > > > Hi Kafka Devs,
> > > >
> > > > Currently, the new consumer provides no way to view a group's status
> > > except
> > > > by inspecting coordinator and consumer logs. This includes listing
> the
> > > > members of the group and their partition assignments. For the old
> > > consumer,
> > > > tools could read this information directly from Zookeeper, but with
> > > > persistence up in the air for the new consumer, that may not be
> > possible.
> > > > Even if it were, we might prefer to use a request API (in line with
> > > KIP-4)
> > > > since that keeps tooling decoupled from the storage system and makes
> > > access
> > > > control easier. Along those lines, I've created KIP-40 to solve this
> > > > problem by extending the GroupMetadata request (formerly known as the
> > > > ConsumerMetadata request). Have a look and let me know what you
> think!
> > > >
> > > > KIP-40:
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-40%3A+GroupMetadata+request+enhancement
> > > >
> > > >
> > > > Thanks,
> > > > Jason
> > >
> >
> >
> >
> > --
> > Thanks,
> > Neha
> >
>
>
>
> --
>
> Regards,
> Ashish
>


[jira] [Updated] (KAFKA-2688) Avoid forcing reload of `Configuration`

2015-10-28 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2688:
---
Assignee: Flavio Junqueira

> Avoid forcing reload of `Configuration`
> ---
>
> Key: KAFKA-2688
> URL: https://issues.apache.org/jira/browse/KAFKA-2688
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Reporter: Ismael Juma
>Assignee: Flavio Junqueira
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> We currently call `Configuration.setConfiguration(null)` from a couple of 
> places in our codebase (`Login` and `JaasUtils`) to force `Configuration` to 
> be reloaded. If this code is removed, some tests can fail depending on the 
> test execution order.
> Ideally we would not need to call `setConfiguration(null)` outside of tests. 
> Investigate if this is possible. If not, we should at least ensure that 
> reloads are done in a safe way within our codebase (perhaps using a lock).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2644) Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL

2015-10-28 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14979408#comment-14979408
 ] 

Ismael Juma commented on KAFKA-2644:


Yes, I think it would make sense to restrict the number of tests. Unlike SSL, 
SASL is only used during connection establishment and since we use long-lived 
connections, its effect on performance should be negligible.

> Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL
> 
>
> Key: KAFKA-2644
> URL: https://issues.apache.org/jira/browse/KAFKA-2644
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> We need to define which of the existing ducktape tests are relevant. cc 
> [~rsivaram]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #70

2015-10-28 Thread Apache Jenkins Server
See 

Changes:

[junrao] MINOR: Add new build target for system test libs

--
[...truncated 144 lines...]

:78:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^
:37:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 expireTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP) {

  ^
:262:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
if (offsetAndMetadata.commitTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

  ^
:293:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.uncleanLeaderElectionRate
^
:294:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.leaderElectionTimer
^
:380:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
if (value.expireTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

  ^
:115:
 value METADATA_FETCH_TIMEOUT_CONFIG in object ProducerConfig is deprecated: 
see corresponding Javadoc for more information.
props.put(ProducerConfig.METADATA_FETCH_TIMEOUT_CONFIG, 
config.metadataFetchTimeoutMs.toString)
 ^
:117:
 value TIMEOUT_CONFIG in object ProducerConfig is deprecated: see corresponding 
Javadoc for more information.
props.put(ProducerConfig.TIMEOUT_CONFIG, config.requestTimeoutMs.toString)
 ^
:121:
 value BLOCK_ON_BUFFER_FULL_CONFIG in object ProducerConfig is deprecated: see 
corresponding Javadoc for more information.
  props.put(ProducerConfig.BLOCK_ON_BUFFER_FULL_CONFIG, "false")
   ^
:75:
 value BLOCK_ON_BUFFER_FULL_CONFIG in object ProducerConfig is deprecated: see 
corresponding Javadoc for more information.
producerProps.put(ProducerConfig.BLOCK_ON_BUFFER_FULL_CONFIG, "true")
 ^
:194:
 value BLOCK_ON_BUFFER_FULL_CONFIG in object ProducerConfig is deprecated: see 
corresponding Javadoc for more information.
  maybeSetDefaultProperty(producerProps, 
ProducerConfig.BLOCK_ON_BUFFER_FULL_CONFIG, "true")
^
:234:
 method readLine in class DeprecatedConsole is deprecated: Use the method in 
scala.io.StdIn

[jira] [Commented] (KAFKA-2702) ConfigDef toHtmlTable() sorts in a way that is a bit confusing

2015-10-28 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14979461#comment-14979461
 ] 

Grant Henke commented on KAFKA-2702:


I can take this one if you are not planning to work on it, Gwen.

> ConfigDef toHtmlTable() sorts in a way that is a bit confusing
> --
>
> Key: KAFKA-2702
> URL: https://issues.apache.org/jira/browse/KAFKA-2702
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>
> Because we put everything without default first (without prioritizing), 
> critical  parameters get placed below low priority ones when they both have 
> no defaults. Some parameters are without default and optional (SASL server in 
> ConsumerConfig for instance).
> Try printing ConsumerConfig parameters and see the mandatory group.id show up 
> as #15.
> I suggest sorting the no-default parameters by priority as well, or perhaps 
> adding a "REQUIRED" category that gets printed first no matter what.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2644) Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL

2015-10-28 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14979481#comment-14979481
 ] 

Jun Rao commented on KAFKA-2644:


[~rsivaram], for benchmark tests for SASL, enabling just 
test_end_to_end_latency is probably enough.

Also, have you done any tests related to TGT renewal? Is it possible to do that 
in a unit test? Thanks,


> Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL
> 
>
> Key: KAFKA-2644
> URL: https://issues.apache.org/jira/browse/KAFKA-2644
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> We need to define which of the existing ducktape tests are relevant. cc 
> [~rsivaram]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2417) Ducktape tests for SSL/TLS

2015-10-28 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14979506#comment-14979506
 ] 

Jun Rao commented on KAFKA-2417:


Some SSL tests have already been enabled in Ducktape in KAFKA-2581.

> Ducktape tests for SSL/TLS
> --
>
> Key: KAFKA-2417
> URL: https://issues.apache.org/jira/browse/KAFKA-2417
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Geoff Anderson
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> The tests should be complementary to the unit/integration tests written as 
> part of KAFKA-1685.
> Things to consider:
> * Upgrade/downgrade to turning on/off SSL
> * Failure testing
> * Expired/revoked certificates
> Some changes to ducktape may be required for upgrade scenarios.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #731

2015-10-28 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2648: enforce non-empty group-ids in join-group request

--
[...truncated 1303 lines...]
kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatIllegalGeneration 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaderFailureInSyncGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testGenerationIdIncrementsOnRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFromIllegalGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testInvalidGroupId PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatDuringRebalanceCausesRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupInconsistentGroupProtocol PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooLarge PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooSmall PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testCommitOffsetWithDefaultGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupFromUnchangedLeaderShouldRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFollowerAfterLeader PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testCommitOffsetInAwaitingSync 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupFromUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupInconsistentProtocolType PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testCommitOffsetFromUnknownGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testLeaveGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupUnknownConsumerNewGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupFromUnchangedFollowerDoesNotRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testValidJoinGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupLeaderAfterFollower PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupFromUnknownMember 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testValidLeaveGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupNotCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testValidHeartbeat PASSED

kafka.coordinator.CoordinatorMetadataTest > testGetGroup PASSED

kafka.coordinator.CoordinatorMetadataTest > 
testAddGroupReturnsPreexistingGroupIfItAlreadyExists PASSED

kafka.coordinator.CoordinatorMetadataTest > testRemoveNonexistentGroup PASSED

kafka.coordinator.CoordinatorMetadataTest > testGetNonexistentGroup PASSED

kafka.api.ProducerFailureHandlingTest > testCannotSendToInternalTopic PASSED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckOne PASSED

kafka.api.ProducerFailureHandlingTest > testWrongBrokerList PASSED

kafka.api.ProducerFailureHandlingTest > testNotEnoughReplicas PASSED

kafka.api.ProducerFailureHandlingTest > testNonExistentTopic PASSED

kafka.api.ProducerFailureHandlingTest > testInvalidPartition PASSED

kafka.api.ProducerFailureHandlingTest > testSendAfterClosed PASSED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckZero PASSED

kafka.api.ProducerFailureHandlingTest > 
testNotEnoughReplicasAfterBrokerShutdown PASSED

kafka.api.PlaintextProducerSendTest > testSerializerConstructors PASSED

kafka.api.PlaintextProducerSendTest > testClose PASSED

kafka.api.PlaintextProducerSendTest > testFlush PASSED

kafka.api.PlaintextProducerSendTest > testSendToPartition PASSED

kafka.api.PlaintextProducerSendTest > testSendOffset PASSED

kafka.api.PlaintextProducerSendTest > testAutoCreateTopic PASSED

kafka.api.PlaintextProducerSendTest > testCloseWithZeroTimeoutFromCallerThread 
PASSED

kafka.api.PlaintextProducerSendTest > testCloseWithZeroTimeoutFromSenderThread 
PASSED

kafka.api.PlaintextProducerSendTest > testWrongSerializer PASSED

kafka.api.ConsumerBounceTest > testSeekAndCommitWithBrokerFailures PASSED

kafka.api.ConsumerBounceTest > testConsumptionWithBrokerFailures 

Build failed in Jenkins: kafka-trunk-jdk7 #732

2015-10-28 Thread Apache Jenkins Server
See 

Changes:

[junrao] MINOR: Add new build target for system test libs

--
[...truncated 426 lines...]
if (!Console.readLine().equalsIgnoreCase("y")) {
 ^
:389:
 class BrokerEndPoint in object UpdateMetadataRequest is deprecated: see 
corresponding Javadoc for more information.
  new UpdateMetadataRequest.BrokerEndPoint(brokerEndPoint.id, 
brokerEndPoint.host, brokerEndPoint.port)
^
:391:
 constructor UpdateMetadataRequest in class UpdateMetadataRequest is 
deprecated: see corresponding Javadoc for more information.
new UpdateMetadataRequest(controllerId, controllerEpoch, 
liveBrokers.asJava, partitionStates.asJava)
^
:129:
 method readFromReadableChannel in class NetworkReceive is deprecated: see 
corresponding Javadoc for more information.
  response.readFromReadableChannel(channel)
   ^
there were 15 feature warnings; re-run with -feature for details
18 warnings found
:kafka-trunk-jdk7:core:processResources UP-TO-DATE
:kafka-trunk-jdk7:core:classes
:kafka-trunk-jdk7:log4j-appender:javadoc UP-TO-DATE
:kafka-trunk-jdk7:core:javadoc
:kafka-trunk-jdk7:core:javadocJar
:kafka-trunk-jdk7:core:scaladoc
[ant:scaladoc] Element 
' 
does not exist.
[ant:scaladoc] 
:293:
 warning: a pure expression does nothing in statement position; you may be 
omitting necessary parentheses
[ant:scaladoc] ControllerStats.uncleanLeaderElectionRate
[ant:scaladoc] ^
[ant:scaladoc] 
:294:
 warning: a pure expression does nothing in statement position; you may be 
omitting necessary parentheses
[ant:scaladoc] ControllerStats.leaderElectionTimer
[ant:scaladoc] ^
[ant:scaladoc] warning: there were 15 feature warnings; re-run with -feature 
for details
[ant:scaladoc] 
:28:
 warning: Could not find any member to link for "NoReplicaOnlineException".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:1160:
 warning: Could not find any member to link for "Exception".
[ant:scaladoc] /**
[ant:scaladoc] ^
[ant:scaladoc] 
:1334:
 warning: Could not find any member to link for "Exception".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:1293:
 warning: Could not find any member to link for "Exception".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:490:
 warning: Could not find any member to link for "Exception".
[ant:scaladoc] /**
[ant:scaladoc] ^
[ant:scaladoc] 
:455:
 warning: Could not find any member to link for "Exception".
[ant:scaladoc] /**
[ant:scaladoc] ^
[ant:scaladoc] 
:1276:
 warning: Could not find any member to link for "Exception".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:1250:
 warning: Could not find any member to link for "Exception".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:1438:
 warning: Could not find any member to link for "Exception".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 
:1415:
 warning: Could not find any member to link for "Exception".
[ant:scaladoc]   /**
[ant:scaladoc]   ^
[ant:scaladoc] 

[jira] [Commented] (KAFKA-2417) Ducktape tests for SSL/TLS

2015-10-28 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14979516#comment-14979516
 ] 

Ismael Juma commented on KAFKA-2417:


Maybe we should close this issue as it's not specific enough.

> Ducktape tests for SSL/TLS
> --
>
> Key: KAFKA-2417
> URL: https://issues.apache.org/jira/browse/KAFKA-2417
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Geoff Anderson
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> The tests should be complementary to the unit/integration tests written as 
> part of KAFKA-1685.
> Things to consider:
> * Upgrade/downgrade to turning on/off SSL
> * Failure testing
> * Expired/revoked certificates
> Some changes to ducktape may be required for upgrade scenarios.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2701) Consumer that uses Zookeeper to connect to Kafka broker, receives messages of server that is secured with SSL

2015-10-28 Thread Mohammad Abbasi (JIRA)
Mohammad Abbasi created KAFKA-2701:
--

 Summary: Consumer that uses Zookeeper to connect to Kafka broker, 
receives messages of server that is secured with SSL
 Key: KAFKA-2701
 URL: https://issues.apache.org/jira/browse/KAFKA-2701
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.9.0.0
Reporter: Mohammad Abbasi


I have a secured Kafka server with SSL+Authentication. Secured and 
authenticated consumers and producers work OK with this server and 
non-configured with SSL consumers and producer cannot send messages to or 
receive messages from secured Kafka server when they are connected "directly"(I 
mean not through the Zookeeper) to the server. 
But when non-authenticated consumer connects through Zookeeper to the broker, 
receives message from secured Kafka server. Is this a bug? or if it's OK, why 
non-authenticated consumer can receive messages from Kafka server which 
requires authentication through SSL?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2701) Consumer that uses Zookeeper to connect to Kafka broker, receives messages of server that is secured with SSL+Authentication

2015-10-28 Thread Mohammad Abbasi (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohammad Abbasi updated KAFKA-2701:
---
Summary: Consumer that uses Zookeeper to connect to Kafka broker, receives 
messages of server that is secured with SSL+Authentication  (was: Consumer that 
uses Zookeeper to connect to Kafka broker, receives messages of server that is 
secured with SSL)

> Consumer that uses Zookeeper to connect to Kafka broker, receives messages of 
> server that is secured with SSL+Authentication
> 
>
> Key: KAFKA-2701
> URL: https://issues.apache.org/jira/browse/KAFKA-2701
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Mohammad Abbasi
>
> I have a secured Kafka server with SSL+Authentication. Secured and 
> authenticated consumers and producers work OK with this server and 
> non-configured with SSL consumers and producer cannot send messages to or 
> receive messages from secured Kafka server when they are connected 
> "directly"(I mean not through the Zookeeper) to the server. 
> But when non-authenticated consumer connects through Zookeeper to the broker, 
> receives message from secured Kafka server. Is this a bug? or if it's OK, why 
> non-authenticated consumer can receive messages from Kafka server which 
> requires authentication through SSL?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2701) Consumer that uses Zookeeper to connect to Kafka broker, receives messages of server that is secured with SSL+Authentication

2015-10-28 Thread Mohammad Abbasi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14978158#comment-14978158
 ] 

Mohammad Abbasi commented on KAFKA-2701:


I'm working on the trunk branch.

> Consumer that uses Zookeeper to connect to Kafka broker, receives messages of 
> server that is secured with SSL+Authentication
> 
>
> Key: KAFKA-2701
> URL: https://issues.apache.org/jira/browse/KAFKA-2701
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Mohammad Abbasi
>
> I have a secured Kafka server with SSL+Authentication. Secured and 
> authenticated consumers and producers work OK with this server and 
> non-configured with SSL consumers and producer cannot send messages to or 
> receive messages from secured Kafka server when they are connected 
> "directly"(I mean not through the Zookeeper) to the server. 
> But when non-authenticated consumer connects through Zookeeper to the broker, 
> receives message from secured Kafka server. Is this a bug? or if it's OK, why 
> non-authenticated consumer can receive messages from Kafka server which 
> requires authentication through SSL?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2702) ConfigDef toHtmlTable() sorts in a way that is a bit confusing

2015-10-28 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14979554#comment-14979554
 ] 

Jay Kreps commented on KAFKA-2702:
--

Aren't things without a default required? The rationale for that order was 
that effectively any required parameter is its own "essential" level of 
importance.

> ConfigDef toHtmlTable() sorts in a way that is a bit confusing
> --
>
> Key: KAFKA-2702
> URL: https://issues.apache.org/jira/browse/KAFKA-2702
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>
> Because we put everything without default first (without prioritizing), 
> critical  parameters get placed below low priority ones when they both have 
> no defaults. Some parameters are without default and optional (SASL server in 
> ConsumerConfig for instance).
> Try printing ConsumerConfig parameters and see the mandatory group.id show up 
> as #15.
> I suggest sorting the no-default parameters by priority as well, or perhaps 
> adding a "REQUIRED" category that gets printed first no matter what.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2644) Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL

2015-10-28 Thread Rajini Sivaram (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14979580#comment-14979580
 ] 

Rajini Sivaram commented on KAFKA-2644:
---

[~junrao] Thank you, I will remove the other benchmark tests for SASL. 

I haven't added any SASL-specific tests under this task, only running existing 
ducktape tests. I had raised KAFKA-2692 for implementing tests for SASL that 
are not covered under these. [~ijuma] Are there unit tests for TGT renewal?

> Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL
> 
>
> Key: KAFKA-2644
> URL: https://issues.apache.org/jira/browse/KAFKA-2644
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> We need to define which of the existing ducktape tests are relevant. cc 
> [~rsivaram]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2644) Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL

2015-10-28 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14979582#comment-14979582
 ] 

Ismael Juma commented on KAFKA-2644:


No, there aren't unit tests for TGT renewal as renewal is currently done by 
running `kinit -R`.

> Run relevant ducktape tests with SASL_PLAINTEXT and SASL_SSL
> 
>
> Key: KAFKA-2644
> URL: https://issues.apache.org/jira/browse/KAFKA-2644
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> We need to define which of the existing ducktape tests are relevant. cc 
> [~rsivaram]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)