[jira] [Commented] (KAFKA-7789) SSL-related unit tests hang when run on Fedora 29

2019-01-07 Thread Tom Bentley (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16735729#comment-16735729
 ] 

Tom Bentley commented on KAFKA-7789:


This is caused by Fedora tightening up its system-wide crypto policies, as 
described here: https://fedoraproject.org/wiki/Changes/StrongCryptoSettings2. 
Their change to {{/etc/crypto-policies/back-ends/java.config}} sets 
{{jdk.certpath.disabledAlgorithms=MD2, MD5, DSA, RSA keySize < 2048}}, thus 
causing the KeyManager to reject RSA keys smaller than 2048 bits. The keys are 
rejected silently unless the {{-Djavax.net.debug=ssl,handshake,keymanager}} 
system property is set. {{TestSslUtils}} generates 1024-bit keys.

Fedora 29 users can change the policy to LEGACY with {{update-crypto-policies 
--set LEGACY}} as root, but this enables LEGACY algorithm support system-wide. 
The better option would be to update the unit tests to use 2048-bit keys.
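The 2048-bit requirement can be reproduced in isolation with plain JCE. This is an illustrative sketch (not the actual {{TestSslUtils}} change) showing key generation at the size the Fedora policy accepts:

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.interfaces.RSAPublicKey;

public class KeySizeCheck {
    public static void main(String[] args) throws Exception {
        // Generate a 2048-bit RSA key pair; keys of this size are not
        // rejected by a policy of "RSA keySize < 2048".
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair pair = gen.generateKeyPair();
        int bits = ((RSAPublicKey) pair.getPublic()).getModulus().bitLength();
        System.out.println("key size: " + bits);
    }
}
```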

> SSL-related unit tests hang when run on Fedora 29
> -
>
> Key: KAFKA-7789
> URL: https://issues.apache.org/jira/browse/KAFKA-7789
> Project: Kafka
>  Issue Type: Bug
>Reporter: Tom Bentley
>Assignee: Tom Bentley
>Priority: Minor
>
> Various SSL-related unit tests (such as {{SslSelectorTest}}) hang when 
> executed on Fedora 29. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7789) SSL-related unit tests hang when run on Fedora 29

2019-01-07 Thread Tom Bentley (JIRA)
Tom Bentley created KAFKA-7789:
--

 Summary: SSL-related unit tests hang when run on Fedora 29
 Key: KAFKA-7789
 URL: https://issues.apache.org/jira/browse/KAFKA-7789
 Project: Kafka
  Issue Type: Bug
Reporter: Tom Bentley
Assignee: Tom Bentley


Various SSL-related unit tests (such as {{SslSelectorTest}}) hang when executed 
on Fedora 29. 





[jira] [Commented] (KAFKA-6359) Work for KIP-236

2018-12-19 Thread Tom Bentley (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16724818#comment-16724818
 ] 

Tom Bentley commented on KAFKA-6359:


[~satish.duggana], [~sriharsha] asked me here (and also out of band) a couple 
of months ago about working on it. I said then that while it's something I 
intend to come back to, it's not something I have time for right now, so he 
was welcome to work on it. I don't know if he's made any progress, so while 
it's fine with me, it would be best to check with him too.

> Work for KIP-236
> 
>
> Key: KAFKA-6359
> URL: https://issues.apache.org/jira/browse/KAFKA-6359
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Tom Bentley
>Assignee: Tom Bentley
>Priority: Minor
>
> This issue is for the work described in KIP-236.





[jira] [Commented] (KAFKA-5692) Refactor PreferredReplicaLeaderElectionCommand to use AdminClient

2018-12-12 Thread Tom Bentley (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-5692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16719212#comment-16719212
 ] 

Tom Bentley commented on KAFKA-5692:


[~junrao] I should have time in the next few weeks to work on this PR again.

> Refactor PreferredReplicaLeaderElectionCommand to use AdminClient
> -
>
> Key: KAFKA-5692
> URL: https://issues.apache.org/jira/browse/KAFKA-5692
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Tom Bentley
>Assignee: Tom Bentley
>Priority: Minor
>  Labels: kip, patch-available
> Fix For: 2.2.0
>
>
> The PreferredReplicaLeaderElectionCommand currently uses a direct connection 
> to zookeeper. The zookeeper dependency should be deprecated and an 
> AdminClient API created to be used instead. 
> This change will require a KIP.





[jira] [Commented] (KAFKA-6598) Kafka to support using ETCD beside Zookeeper

2018-03-08 Thread Tom Bentley (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16391312#comment-16391312
 ] 

Tom Bentley commented on KAFKA-6598:


[~cmccabe] any more info about that (such as when the KIP might be published)?

> Kafka to support using ETCD beside Zookeeper
> 
>
> Key: KAFKA-6598
> URL: https://issues.apache.org/jira/browse/KAFKA-6598
> Project: Kafka
>  Issue Type: New Feature
>  Components: clients, core
>Reporter: Sebastian Toader
>Priority: Major
>
> The current Kafka implementation is bound to {{Zookeeper}} to store its 
> metadata for forming a cluster of nodes (producer/consumer/broker). 
> As Kafka is becoming popular for streaming in various environments where 
> {{Zookeeper}} is either not easy to deploy/manage or there are better 
> alternatives to it, there is a need to run Kafka with a metastore 
> implementation other than {{Zookeeper}}.
> {{etcd}} can provide the same semantics as {{Zookeeper}} for Kafka and since 
> {{etcd}} is the favorable choice in certain environments (e.g. Kubernetes) 
> Kafka should be able to run with {{etcd}}.
> From the user's point of view it should be straightforward to configure 
> Kafka to use {{etcd}} by simply specifying a connection string that points 
> to the {{etcd}} cluster.
> To avoid introducing instability the original interfaces should be kept and 
> only the low-level {{Zookeeper}} API calls should be replaced with {{etcd}} 
> API calls when Kafka is configured to use {{etcd}}.
> In the long run (which is out of scope of this jira) there should be an 
> abstraction layer in Kafka which various metastore implementations would 
> then implement.





[jira] [Commented] (KAFKA-2967) Move Kafka documentation to ReStructuredText

2018-02-14 Thread Tom Bentley (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16364006#comment-16364006
 ] 

Tom Bentley commented on KAFKA-2967:


I too would prefer asciidoc.

> Move Kafka documentation to ReStructuredText
> 
>
> Key: KAFKA-2967
> URL: https://issues.apache.org/jira/browse/KAFKA-2967
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
>Priority: Major
>
> Storing documentation as HTML is kind of BS :)
> * Formatting is a pain, and making it look good is even worse
> * Its just HTML, can't generate PDFs
> * Reading and editing is painful
> * Validating changes is hard because our formatting relies on all kinds of 
> Apache Server features.
> I suggest:
> * Move to RST
> * Generate HTML and PDF during build using Sphinx plugin for Gradle.
> Lots of Apache projects are doing this.





[jira] [Created] (KAFKA-6379) Work for KIP-240

2017-12-18 Thread Tom Bentley (JIRA)
Tom Bentley created KAFKA-6379:
--

 Summary: Work for KIP-240
 Key: KAFKA-6379
 URL: https://issues.apache.org/jira/browse/KAFKA-6379
 Project: Kafka
  Issue Type: Bug
Reporter: Tom Bentley
Assignee: Tom Bentley
Priority: Minor


This issue is for the work described in KIP-240.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6359) Work for KIP-236

2017-12-13 Thread Tom Bentley (JIRA)
Tom Bentley created KAFKA-6359:
--

 Summary: Work for KIP-236
 Key: KAFKA-6359
 URL: https://issues.apache.org/jira/browse/KAFKA-6359
 Project: Kafka
  Issue Type: Improvement
Reporter: Tom Bentley
Assignee: Tom Bentley
Priority: Minor


This issue is for the work described in KIP-236.





[jira] [Commented] (KAFKA-6283) Configuration of custom SCRAM SaslServer implementations

2017-12-11 Thread Tom Bentley (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16285958#comment-16285958
 ] 

Tom Bentley commented on KAFKA-6283:


See KIP-86. Closing this as it is essentially a dupe of 
https://issues.apache.org/jira/browse/KAFKA-4292

> Configuration of custom SCRAM SaslServer implementations
> 
>
> Key: KAFKA-6283
> URL: https://issues.apache.org/jira/browse/KAFKA-6283
> Project: Kafka
>  Issue Type: Bug
>Reporter: Tom Bentley
>Assignee: Tom Bentley
>Priority: Minor
>
> It is difficult to supply configuration information to a custom 
> {{SaslServer}} implementation when a SCRAM mechanism is used. 
> {{SaslServerAuthenticator.createSaslServer()}} creates a {{SaslServer}} for a 
> given mechanism. The call to {{Sasl.createSaslServer()}} passes the broker 
> config and a callback handler. In the case of a SCRAM mechanism the callback 
> handler is a {{ScramServerCallbackHandler}} which doesn't have access to the 
> {{jaasContext}}. This makes it hard to configure such a {{SaslServer}} 
> because I can't supply custom keys to the broker config (any unknown ones get 
> removed) and I don't have access to the JAAS config.
> In the case of a non-SCRAM {{SaslServer}}, I at least have access to the JAAS 
> config via the {{SaslServerCallbackHandler}}.
> A simple way to solve this would be to pass the {{jaasContext}} to the 
> {{ScramServerCallbackHandler}} from where a custom {{SaslServerFactory}} 
> could retrieve it.





[jira] [Resolved] (KAFKA-6283) Configuration of custom SCRAM SaslServer implementations

2017-12-11 Thread Tom Bentley (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley resolved KAFKA-6283.

Resolution: Duplicate

> Configuration of custom SCRAM SaslServer implementations
> 
>
> Key: KAFKA-6283
> URL: https://issues.apache.org/jira/browse/KAFKA-6283
> Project: Kafka
>  Issue Type: Bug
>Reporter: Tom Bentley
>Assignee: Tom Bentley
>Priority: Minor
>
> It is difficult to supply configuration information to a custom 
> {{SaslServer}} implementation when a SCRAM mechanism is used. 
> {{SaslServerAuthenticator.createSaslServer()}} creates a {{SaslServer}} for a 
> given mechanism. The call to {{Sasl.createSaslServer()}} passes the broker 
> config and a callback handler. In the case of a SCRAM mechanism the callback 
> handler is a {{ScramServerCallbackHandler}} which doesn't have access to the 
> {{jaasContext}}. This makes it hard to configure such a {{SaslServer}} 
> because I can't supply custom keys to the broker config (any unknown ones get 
> removed) and I don't have access to the JAAS config.
> In the case of a non-SCRAM {{SaslServer}}, I at least have access to the JAAS 
> config via the {{SaslServerCallbackHandler}}.
> A simple way to solve this would be to pass the {{jaasContext}} to the 
> {{ScramServerCallbackHandler}} from where a custom {{SaslServerFactory}} 
> could retrieve it.





[jira] [Commented] (KAFKA-6272) SASL PLAIN and SCRAM do not apply SASLPrep

2017-12-01 Thread Tom Bentley (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16274241#comment-16274241
 ] 

Tom Bentley commented on KAFKA-6272:


To fix this we would have to apply SASLprep to usernames and passwords for 
SASL PLAIN and SCRAM. Doing that could deny authentication to existing users 
because:

1. SASLprep-ing the password before computing the client proof (in the SASL 
client) means the client produces a different proof whenever the output of 
SASLprep differs from the input. Upgrading the broker is insufficient because 
the password hashes in ZooKeeper will have been computed using unprepped 
passwords. 
2. Likewise, if a new user account is added (which preps the password) then an 
old client won't authenticate.

To address this we would need to change how the passwords are stored in 
ZooKeeper to include two hashes -- one computed from a prepped password and 
another from an unprepped password. We could then use the version of the 
ApiVersions request from the client to determine which stored hash to use when 
performing SASL SCRAM authentication. 

At some point in the future, when we no longer support clients which don't do 
SASLprep, we could stop storing both kinds of hash in ZooKeeper. 

All of which is basically a long-winded way of saying this will require a KIP.
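To illustrate why two stored hashes would be needed, here is a minimal sketch. Java's standard library has no SASLprep implementation, so NFKC normalization (one component of SASLprep) stands in for the full RFC 4013 algorithm, and a bare SHA-256 stands in for SCRAM's salted, iterated hashing:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.text.Normalizer;
import java.util.Arrays;

public class DualHashSketch {
    // Stand-in for RFC 4013 SASLprep: NFKC only. Real SASLprep also
    // maps and prohibits certain code points.
    static String saslPrep(String s) {
        return Normalizer.normalize(s, Normalizer.Form.NFKC);
    }

    static byte[] hash(String s) throws Exception {
        // Simplified; real SCRAM uses a salted, iterated hash.
        return MessageDigest.getInstance("SHA-256")
                .digest(s.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws Exception {
        // U+FF50 FULLWIDTH LATIN SMALL LETTER P normalizes to 'p', so the
        // prepped and unprepped forms of the password hash differently:
        // a broker holding only one of the two hashes cannot verify both
        // old (unprepped) and new (prepped) clients.
        String password = "\uFF50assword";
        byte[] unprepped = hash(password);
        byte[] prepped = hash(saslPrep(password));
        System.out.println("hashes equal: " + Arrays.equals(unprepped, prepped));
    }
}
```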

> SASL PLAIN and SCRAM do not apply SASLPrep
> --
>
> Key: KAFKA-6272
> URL: https://issues.apache.org/jira/browse/KAFKA-6272
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Tom Bentley
>Assignee: Tom Bentley
>Priority: Minor
>
> [RFC 5802|https://tools.ietf.org/html/rfc5802] (SASL SCRAM) says:
> {quote}
> Before sending the username to the server, the client SHOULD
> prepare the username using the "SASLprep" profile [RFC4013] of
> the "stringprep" algorithm [RFC3454] treating it as a query
> string (i.e., unassigned Unicode code points are allowed).
> {quote}
> ScramSaslClient uses ScramFormatter.normalize(), which just UTF-8 encodes the 
> bytes.
> Likewise [RFC 4616|https://tools.ietf.org/html/rfc4616] (SASL PLAIN) says:
> {quote}
> The presented authentication identity and password strings, as well
> as the database authentication identity and password strings, are to
> be prepared before being used in the verification process.  The
> [SASLPrep] profile of the [StringPrep] algorithm is the RECOMMENDED
> preparation algorithm. The SASLprep preparation algorithm is recommended to 
> improve the likelihood that comparisons behave in an expected manner.  The 
> SASLprep preparation algorithm is not mandatory so as to allow the server to 
> employ other preparation algorithms (including none) when appropriate.  For 
> instance, use of a different preparation algorithm may be necessary for the 
> server to interoperate with an external system.
> {quote}
> But the comparison is simply on the bare strings.
> This doesn't cause problems with the SASL components distributed with Kafka 
> (because they consistently don't do any string preparation), but it makes it 
> harder to, for example, use the Kafka {{SaslClients}} on clients, but 
> configure a different {{SaslServer}} on brokers.





[jira] [Commented] (KAFKA-6283) Configuration of custom SCRAM SaslServer implementations

2017-11-29 Thread Tom Bentley (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16271003#comment-16271003
 ] 

Tom Bentley commented on KAFKA-6283:


[~ijuma], [~rsivaram] this is a very minor change, but I suppose it would still 
require a KIP?

> Configuration of custom SCRAM SaslServer implementations
> 
>
> Key: KAFKA-6283
> URL: https://issues.apache.org/jira/browse/KAFKA-6283
> Project: Kafka
>  Issue Type: Bug
>Reporter: Tom Bentley
>Assignee: Tom Bentley
>Priority: Minor
>
> It is difficult to supply configuration information to a custom 
> {{SaslServer}} implementation when a SCRAM mechanism is used. 
> {{SaslServerAuthenticator.createSaslServer()}} creates a {{SaslServer}} for a 
> given mechanism. The call to {{Sasl.createSaslServer()}} passes the broker 
> config and a callback handler. In the case of a SCRAM mechanism the callback 
> handler is a {{ScramServerCallbackHandler}} which doesn't have access to the 
> {{jaasContext}}. This makes it hard to configure such a {{SaslServer}} 
> because I can't supply custom keys to the broker config (any unknown ones get 
> removed) and I don't have access to the JAAS config.
> In the case of a non-SCRAM {{SaslServer}}, I at least have access to the JAAS 
> config via the {{SaslServerCallbackHandler}}.
> A simple way to solve this would be to pass the {{jaasContext}} to the 
> {{ScramServerCallbackHandler}} from where a custom {{SaslServerFactory}} 
> could retrieve it.





[jira] [Created] (KAFKA-6283) Configuration of custom SCRAM SaslServer implementations

2017-11-29 Thread Tom Bentley (JIRA)
Tom Bentley created KAFKA-6283:
--

 Summary: Configuration of custom SCRAM SaslServer implementations
 Key: KAFKA-6283
 URL: https://issues.apache.org/jira/browse/KAFKA-6283
 Project: Kafka
  Issue Type: Bug
Reporter: Tom Bentley
Assignee: Tom Bentley
Priority: Minor


It is difficult to supply configuration information to a custom {{SaslServer}} 
implementation when a SCRAM mechanism is used. 

{{SaslServerAuthenticator.createSaslServer()}} creates a {{SaslServer}} for a 
given mechanism. The call to {{Sasl.createSaslServer()}} passes the broker 
config and a callback handler. In the case of a SCRAM mechanism the callback 
handler is a {{ScramServerCallbackHandler}} which doesn't have access to the 
{{jaasContext}}. This makes it hard to configure such a {{SaslServer}} 
because I can't supply custom keys to the broker config (any unknown ones get 
removed) and I don't have access to the JAAS config.

In the case of a non-SCRAM {{SaslServer}}, I at least have access to the JAAS 
config via the {{SaslServerCallbackHandler}}.

A simple way to solve this would be to pass the {{jaasContext}} to the 
{{ScramServerCallbackHandler}} from where a custom {{SaslServerFactory}} could 
retrieve it.
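A sketch of the proposed shape follows; the class and method names here only approximate Kafka's internals and this is not the actual implementation:

```java
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;

// Hypothetical stand-in for Kafka's JaasContext.
class JaasContext {
    private final String contextName;
    JaasContext(String contextName) { this.contextName = contextName; }
    String contextName() { return contextName; }
}

// Proposed: the SCRAM callback handler carries the JaasContext so a
// custom SaslServerFactory can read mechanism options from JAAS config.
class ScramServerCallbackHandler implements CallbackHandler {
    private final JaasContext jaasContext;

    ScramServerCallbackHandler(JaasContext jaasContext) {
        this.jaasContext = jaasContext;
    }

    JaasContext jaasContext() { return jaasContext; }

    @Override
    public void handle(Callback[] callbacks) {
        // Credential lookup elided in this sketch.
    }
}

public class ScramConfigSketch {
    public static void main(String[] args) {
        ScramServerCallbackHandler handler =
                new ScramServerCallbackHandler(new JaasContext("KafkaServer"));
        // A custom SaslServer factory could now retrieve the JAAS context:
        System.out.println("context: " + handler.jaasContext().contextName());
    }
}
```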





[jira] [Updated] (KAFKA-6272) SASL PLAIN and SCRAM do not apply SASLPrep

2017-11-27 Thread Tom Bentley (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley updated KAFKA-6272:
---
Description: 
[RFC 5802|https://tools.ietf.org/html/rfc5802] (SASL SCRAM) says:

{quote}
Before sending the username to the server, the client SHOULD
prepare the username using the "SASLprep" profile [RFC4013] of
the "stringprep" algorithm [RFC3454] treating it as a query
string (i.e., unassigned Unicode code points are allowed).
{quote}

ScramSaslClient uses ScramFormatter.normalize(), which just UTF-8 encodes the 
bytes.

Likewise [RFC 4616|https://tools.ietf.org/html/rfc4616] (SASL PLAIN) says:

{quote}
The presented authentication identity and password strings, as well
as the database authentication identity and password strings, are to
be prepared before being used in the verification process.  The
[SASLPrep] profile of the [StringPrep] algorithm is the RECOMMENDED
preparation algorithm. The SASLprep preparation algorithm is recommended to 
improve the likelihood that comparisons behave in an expected manner.  The 
SASLprep preparation algorithm is not mandatory so as to allow the server to 
employ other preparation algorithms (including none) when appropriate.  For 
instance, use of a different preparation algorithm may be necessary for the 
server to interoperate with an external system.
{quote}

But the comparison is simply on the bare strings.

This doesn't cause problems with the SASL components distributed with Kafka 
(because they consistently don't do any string preparation), but it makes it 
harder to, for example, use the Kafka {{SaslClients}} on clients, but 
configure a different {{SaslServer}} on brokers.

  was:
[RFC 5802|https://tools.ietf.org/html/rfc5802] (SASL SCRAM) says:

{quote}
Before sending the username to the server, the client SHOULD
prepare the username using the "SASLprep" profile [RFC4013] of
the "stringprep" algorithm [RFC3454] treating it as a query
string (i.e., unassigned Unicode code points are allowed).
{quote}

ScramSaslClient uses ScramFormatter.normalize(), which just UTF-8 encodes the 
bytes.

Likewise [RFC 4616|https://tools.ietf.org/html/rfc4616] (SASL PLAIN) says:

{quote}
The presented authentication identity and password strings, as well
as the database authentication identity and password strings, are to
be prepared before being used in the verification process.  The
[SASLPrep] profile of the [StringPrep] algorithm is the RECOMMENDED
preparation algorithm. The SASLprep preparation algorithm is recommended to 
improve the likelihood that comparisons behave in an expected manner.  The 
SASLprep preparation algorithm is not mandatory so as to allow the server to 
employ other preparation algorithms (including none) when appropriate.  For 
instance, use of a different preparation algorithm may be necessary for the 
server to interoperate with an external system.
{quote}

But the comparison is simply on the bare strings.

This doesn't cause problems with the SASL components distributed with Kafka 
(because they consistently don't do any string preparation), but it makes it 
harder to, for example, use the Kafka {{SaslClient}}s on clients, but 
configure a different {{SaslServer}} on brokers.


> SASL PLAIN and SCRAM do not apply SASLPrep
> --
>
> Key: KAFKA-6272
> URL: https://issues.apache.org/jira/browse/KAFKA-6272
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Tom Bentley
>Assignee: Tom Bentley
>Priority: Minor
>
> [RFC 5802|https://tools.ietf.org/html/rfc5802] (SASL SCRAM) says:
> {quote}
> Before sending the username to the server, the client SHOULD
> prepare the username using the "SASLprep" profile [RFC4013] of
> the "stringprep" algorithm [RFC3454] treating it as a query
> string (i.e., unassigned Unicode code points are allowed).
> {quote}
> ScramSaslClient uses ScramFormatter.normalize(), which just UTF-8 encodes the 
> bytes.
> Likewise [RFC 4616|https://tools.ietf.org/html/rfc4616] (SASL PLAIN) says:
> {quote}
> The presented authentication identity and password strings, as well
> as the database authentication identity and password strings, are to
> be prepared before being used in the verification process.  The
> [SASLPrep] profile of the [StringPrep] algorithm is the RECOMMENDED
> preparation algorithm. The SASLprep preparation algorithm is recommended to 
> improve the likelihood that comparisons behave in an expected manner.  The 
> SASLprep preparation algorithm is not mandatory so as to allow the server to 
> employ other preparation algorithms (including none) when appropriate.  For 
> instance, use of a different preparation algorithm may be necessary for the 
> server to interoperate with an external system.
> {quote}
> But the comparison is simply on the bare strings.
> This doesn't cause problems with the SASL components distributed with Kafka 
> (because they consistently don't do any string preparation), but it makes it 
> harder to, for example, use the Kafka {{SaslClients}} on clients, but 
> configure a different {{SaslServer}} on brokers.

[jira] [Created] (KAFKA-6272) SASL PLAIN and SCRAM do not apply SASLPrep

2017-11-27 Thread Tom Bentley (JIRA)
Tom Bentley created KAFKA-6272:
--

 Summary: SASL PLAIN and SCRAM do not apply SASLPrep
 Key: KAFKA-6272
 URL: https://issues.apache.org/jira/browse/KAFKA-6272
 Project: Kafka
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Tom Bentley
Assignee: Tom Bentley
Priority: Minor


[RFC 5802|https://tools.ietf.org/html/rfc5802] (SASL SCRAM) says:

{quote}
Before sending the username to the server, the client SHOULD
prepare the username using the "SASLprep" profile [RFC4013] of
the "stringprep" algorithm [RFC3454] treating it as a query
string (i.e., unassigned Unicode code points are allowed).
{quote}

ScramSaslClient uses ScramFormatter.normalize(), which just UTF-8 encodes the 
bytes.

Likewise [RFC 4616|https://tools.ietf.org/html/rfc4616] (SASL PLAIN) says:

{quote}
The presented authentication identity and password strings, as well
as the database authentication identity and password strings, are to
be prepared before being used in the verification process.  The
[SASLPrep] profile of the [StringPrep] algorithm is the RECOMMENDED
preparation algorithm. The SASLprep preparation algorithm is recommended to 
improve the likelihood that comparisons behave in an expected manner.  The 
SASLprep preparation algorithm is not mandatory so as to allow the server to 
employ other preparation algorithms (including none) when appropriate.  For 
instance, use of a different preparation algorithm may be necessary for the 
server to interoperate with an external system.
{quote}

But the comparison is simply on the bare strings.

This doesn't cause problems with the SASL components distributed with Kafka 
(because they consistently don't do any string preparation), but it makes it 
harder to, for example, use the Kafka {{SaslClient}}s on clients, but 
configure a different {{SaslServer}} on brokers.





[jira] [Commented] (KAFKA-6251) Update kafka-configs.sh to use the new AdminClient

2017-11-21 Thread Tom Bentley (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16261042#comment-16261042
 ] 

Tom Bentley commented on KAFKA-6251:


This is a dupe of https://issues.apache.org/jira/browse/KAFKA-5561, I think

> Update kafka-configs.sh to use the new AdminClient
> --
>
> Key: KAFKA-6251
> URL: https://issues.apache.org/jira/browse/KAFKA-6251
> Project: Kafka
>  Issue Type: New Feature
>  Components: tools
>Reporter: Rajini Sivaram
> Fix For: 1.1.0
>
>
> The tool {{kafka-configs.sh}} that is used to describe/update dynamic 
> configuration options (topic/quota etc.) currently updates configs directly 
> in ZooKeeper. We should switch this to using the new AdminClient so that 
> updates can be validated and secured without access to ZK. 
> This needs a KIP since command line options will need to change.





[jira] [Created] (KAFKA-6143) VerifiableProducer & VerifiableConsumer need tests

2017-10-30 Thread Tom Bentley (JIRA)
Tom Bentley created KAFKA-6143:
--

 Summary: VerifiableProducer & VerifiableConsumer need tests
 Key: KAFKA-6143
 URL: https://issues.apache.org/jira/browse/KAFKA-6143
 Project: Kafka
  Issue Type: Bug
Reporter: Tom Bentley
Priority: Minor


The {{VerifiableProducer}} and {{VerifiableConsumer}} are used for system 
tests, but don't have any tests themselves. They should.





[jira] [Updated] (KAFKA-6130) VerifiableConsumer with --max-messages doesn't exit

2017-10-26 Thread Tom Bentley (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley updated KAFKA-6130:
---
Summary: VerifiableConsumer with --max-messages doesn't exit  (was: 
VerifiableConsume with --max-messages )

> VerifiableConsumer with --max-messages doesn't exit
> ---
>
> Key: KAFKA-6130
> URL: https://issues.apache.org/jira/browse/KAFKA-6130
> Project: Kafka
>  Issue Type: Bug
>Reporter: Tom Bentley
>Assignee: Tom Bentley
>Priority: Minor
>
> If I run {{kafka-verifiable-consumer.sh --max-messages=N}} I expect the tool 
> to consume N messages and then exit. It will actually consume as many 
> messages as are in the topic and then block.
> The problem is that although reaching the max messages will cause the loop 
> in onRecordsReceived() to break, the loop in run() will just call 
> onRecordsReceived() again.





[jira] [Created] (KAFKA-6130) VerifiableConsume with --max-messages

2017-10-26 Thread Tom Bentley (JIRA)
Tom Bentley created KAFKA-6130:
--

 Summary: VerifiableConsume with --max-messages 
 Key: KAFKA-6130
 URL: https://issues.apache.org/jira/browse/KAFKA-6130
 Project: Kafka
  Issue Type: Bug
Reporter: Tom Bentley
Assignee: Tom Bentley
Priority: Minor


If I run {{kafka-verifiable-consumer.sh --max-messages=N}} I expect the tool to 
consume N messages and then exit. It will actually consume as many messages as 
are in the topic and then block.

The problem is that although reaching the max messages will cause the loop in 
onRecordsReceived() to break, the loop in run() will just call 
onRecordsReceived() again.
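The control-flow problem reduces to the sketch below (the method shapes are hypothetical, not the actual VerifiableConsumer code): the inner loop's break is invisible to the caller unless it returns a signal the outer loop checks.

```java
public class ConsumeLoopSketch {
    static final int MAX_MESSAGES = 5;
    static int consumed = 0;

    // Fixed version: report back whether the limit has been reached,
    // instead of just breaking out of the inner loop.
    static boolean onRecordsReceived(int batchSize) {
        for (int i = 0; i < batchSize; i++) {
            consumed++;
            if (consumed >= MAX_MESSAGES) {
                return false; // limit reached: tell run() to stop
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // run(): the buggy version loops unconditionally; the fix is to
        // stop when onRecordsReceived() signals the limit was hit.
        boolean keepRunning = true;
        while (keepRunning) {
            keepRunning = onRecordsReceived(3); // simulated poll() batches
        }
        System.out.println("consumed " + consumed);
    }
}
```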





[jira] [Updated] (KAFKA-5693) TopicCreationPolicy and AlterConfigsPolicy overlap

2017-10-24 Thread Tom Bentley (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley updated KAFKA-5693:
---
Labels: kip  (was: )

> TopicCreationPolicy and AlterConfigsPolicy overlap
> --
>
> Key: KAFKA-5693
> URL: https://issues.apache.org/jira/browse/KAFKA-5693
> Project: Kafka
>  Issue Type: Bug
>Reporter: Tom Bentley
>Priority: Minor
>  Labels: kip
> Fix For: 1.1.0
>
>
> The administrator of a cluster can configure a {{CreateTopicPolicy}}, which 
> has access to the topic configs as well as other metadata to make its 
> decision about whether a topic creation is allowed. Thus in theory the 
> decision could be based on a combination of the replication factor and 
> the topic configs, for example. 
> Separately there is an AlterConfigPolicy, which only has access to the 
> configs (and can apply to configurable entities other than just topics).
> There are potential issues with this. For example although the 
> CreateTopicPolicy is checked at creation time, it's not checked for any later 
> alterations to the topic config. So policies which depend on both the topic 
> configs and other topic metadata could be worked around by changing the 
> configs after creation.





[jira] [Updated] (KAFKA-5693) TopicCreationPolicy and AlterConfigsPolicy overlap

2017-10-24 Thread Tom Bentley (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley updated KAFKA-5693:
---
Fix Version/s: 1.1.0

> TopicCreationPolicy and AlterConfigsPolicy overlap
> --
>
> Key: KAFKA-5693
> URL: https://issues.apache.org/jira/browse/KAFKA-5693
> Project: Kafka
>  Issue Type: Bug
>Reporter: Tom Bentley
>Priority: Minor
> Fix For: 1.1.0
>
>
> The administrator of a cluster can configure a {{CreateTopicPolicy}}, which 
> has access to the topic configs as well as other metadata to make its 
> decision about whether a topic creation is allowed. Thus in theory the 
> decision could be based on a combination of the replication factor and 
> the topic configs, for example. 
> Separately there is an AlterConfigPolicy, which only has access to the 
> configs (and can apply to configurable entities other than just topics).
> There are potential issues with this. For example although the 
> CreateTopicPolicy is checked at creation time, it's not checked for any later 
> alterations to the topic config. So policies which depend on both the topic 
> configs and other topic metadata could be worked around by changing the 
> configs after creation.





[jira] [Assigned] (KAFKA-6046) DeleteRecordsRequest to a non-leader

2017-10-12 Thread Tom Bentley (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley reassigned KAFKA-6046:
--

Assignee: Ted Yu

> DeleteRecordsRequest to a non-leader
> 
>
> Key: KAFKA-6046
> URL: https://issues.apache.org/jira/browse/KAFKA-6046
> Project: Kafka
>  Issue Type: Bug
>Reporter: Tom Bentley
>Assignee: Ted Yu
> Fix For: 1.1.0
>
>
> When a `DeleteRecordsRequest` is sent to a broker that's not the leader for 
> the partition the  `DeleteRecordsResponse` returns 
> `UNKNOWN_TOPIC_OR_PARTITION`. This is ambiguous (does the topic not exist on 
> any broker, or did we just sent the request to the wrong broker?), and 
> inconsistent (a `ProduceRequest` would return `NOT_LEADER_FOR_PARTITION`).





[jira] [Assigned] (KAFKA-6050) --entity-name should print error message

2017-10-12 Thread Tom Bentley (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley reassigned KAFKA-6050:
--

Assignee: Tom Bentley

> --entity-name  should print error message
> --
>
> Key: KAFKA-6050
> URL: https://issues.apache.org/jira/browse/KAFKA-6050
> Project: Kafka
>  Issue Type: Bug
>Reporter: Tom Bentley
>Assignee: Tom Bentley
>
> The command to describe the default topic config
> {noformat}
> bin/kafka-configs.sh --zookeeper localhost:2181 \
>   --describe --entity-type topics --entity-name ''
> {noformat}
> returns without error, but the equivalent command to alter the default topic 
> config:
> {noformat}
> bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
>   --entity-type topics --entity-name '' --add-config 
> retention.ms=1000
> {noformat}
> returns an error:
> {noformat}
> Error while executing config command Topic name "" is illegal, it 
> contains a character other than ASCII alphanumerics, '.', '_' and '-'
> org.apache.kafka.common.errors.InvalidTopicException: Topic name "" 
> is illegal, it contains a character other than ASCII alphanumerics, '.', '_' 
> and '-'
>   at org.apache.kafka.common.internals.Topic.validate(Topic.java:45)
>   at kafka.admin.AdminUtils$.validateTopicConfig(AdminUtils.scala:578)
>   at kafka.admin.AdminUtils$.changeTopicConfig(AdminUtils.scala:595)
>   at kafka.admin.AdminUtilities$class.changeConfigs(AdminUtils.scala:52)
>   at kafka.admin.AdminUtils$.changeConfigs(AdminUtils.scala:63)
>   at kafka.admin.ConfigCommand$.alterConfig(ConfigCommand.scala:103)
>   at kafka.admin.ConfigCommand$.main(ConfigCommand.scala:70)
>   at kafka.admin.ConfigCommand.main(ConfigCommand.scala)
> {noformat}





[jira] [Commented] (KAFKA-6050) --entity-name should print error message

2017-10-12 Thread Tom Bentley (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201667#comment-16201667
 ] 

Tom Bentley commented on KAFKA-6050:


It appears that {{--entity-default}} with topics doesn't work. 

{noformat}
$ bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
>   --entity-type topics --entity-default --add-config retention.ms=1000
Exception in thread "main" java.lang.IllegalArgumentException: --entity-name 
must be specified with --alter of Buffer(topics)
at 
kafka.admin.ConfigCommand$ConfigCommandOptions.checkArgs(ConfigCommand.scala:331)
at kafka.admin.ConfigCommand$.main(ConfigCommand.scala:61)
at kafka.admin.ConfigCommand.main(ConfigCommand.scala)
{noformat}

> --entity-name  should print error message
> --
>
> Key: KAFKA-6050
> URL: https://issues.apache.org/jira/browse/KAFKA-6050
> Project: Kafka
>  Issue Type: Bug
>Reporter: Tom Bentley
>
> The command to describe the default topic config
> {noformat}
> bin/kafka-configs.sh --zookeeper localhost:2181 \
>   --describe --entity-type topics --entity-name ''
> {noformat}
> returns without error, but the equivalent command to alter the default topic 
> config:
> {noformat}
> bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
>   --entity-type topics --entity-name '' --add-config 
> retention.ms=1000
> {noformat}
> returns an error:
> {noformat}
> Error while executing config command Topic name "" is illegal, it 
> contains a character other than ASCII alphanumerics, '.', '_' and '-'
> org.apache.kafka.common.errors.InvalidTopicException: Topic name "" 
> is illegal, it contains a character other than ASCII alphanumerics, '.', '_' 
> and '-'
>   at org.apache.kafka.common.internals.Topic.validate(Topic.java:45)
>   at kafka.admin.AdminUtils$.validateTopicConfig(AdminUtils.scala:578)
>   at kafka.admin.AdminUtils$.changeTopicConfig(AdminUtils.scala:595)
>   at kafka.admin.AdminUtilities$class.changeConfigs(AdminUtils.scala:52)
>   at kafka.admin.AdminUtils$.changeConfigs(AdminUtils.scala:63)
>   at kafka.admin.ConfigCommand$.alterConfig(ConfigCommand.scala:103)
>   at kafka.admin.ConfigCommand$.main(ConfigCommand.scala:70)
>   at kafka.admin.ConfigCommand.main(ConfigCommand.scala)
> {noformat}





[jira] [Comment Edited] (KAFKA-6050) --entity-name should print error message

2017-10-12 Thread Tom Bentley (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16201667#comment-16201667
 ] 

Tom Bentley edited comment on KAFKA-6050 at 10/12/17 9:09 AM:
--

It appears that {{--entity-default}} with topics doesn't work. 


{noformat}
$ bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
>   --entity-type topics --entity-default --add-config retention.ms=1000
Exception in thread "main" java.lang.IllegalArgumentException: --entity-name 
must be specified with --alter of Buffer(topics)
at 
kafka.admin.ConfigCommand$ConfigCommandOptions.checkArgs(ConfigCommand.scala:331)
at kafka.admin.ConfigCommand$.main(ConfigCommand.scala:61)
at kafka.admin.ConfigCommand.main(ConfigCommand.scala)

{noformat}




was (Author: tombentley):
It appears that {{--entity-default}} with topics doesn't work. 

{noformat}
$ bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
>   --entity-type topics --entity-default --add-config retention.ms=1000
Exception in thread "main" java.lang.IllegalArgumentException: --entity-name 
must be specified with --alter of Buffer(topics)
at 
kafka.admin.ConfigCommand$ConfigCommandOptions.checkArgs(ConfigCommand.scala:331)
at kafka.admin.ConfigCommand$.main(ConfigCommand.scala:61)
at kafka.admin.ConfigCommand.main(ConfigCommand.scala)
{noformat}

> --entity-name  should print error message
> --
>
> Key: KAFKA-6050
> URL: https://issues.apache.org/jira/browse/KAFKA-6050
> Project: Kafka
>  Issue Type: Bug
>Reporter: Tom Bentley
>
> The command to describe the default topic config
> {noformat}
> bin/kafka-configs.sh --zookeeper localhost:2181 \
>   --describe --entity-type topics --entity-name ''
> {noformat}
> returns without error, but the equivalent command to alter the default topic 
> config:
> {noformat}
> bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
>   --entity-type topics --entity-name '' --add-config 
> retention.ms=1000
> {noformat}
> returns an error:
> {noformat}
> Error while executing config command Topic name "" is illegal, it 
> contains a character other than ASCII alphanumerics, '.', '_' and '-'
> org.apache.kafka.common.errors.InvalidTopicException: Topic name "" 
> is illegal, it contains a character other than ASCII alphanumerics, '.', '_' 
> and '-'
>   at org.apache.kafka.common.internals.Topic.validate(Topic.java:45)
>   at kafka.admin.AdminUtils$.validateTopicConfig(AdminUtils.scala:578)
>   at kafka.admin.AdminUtils$.changeTopicConfig(AdminUtils.scala:595)
>   at kafka.admin.AdminUtilities$class.changeConfigs(AdminUtils.scala:52)
>   at kafka.admin.AdminUtils$.changeConfigs(AdminUtils.scala:63)
>   at kafka.admin.ConfigCommand$.alterConfig(ConfigCommand.scala:103)
>   at kafka.admin.ConfigCommand$.main(ConfigCommand.scala:70)
>   at kafka.admin.ConfigCommand.main(ConfigCommand.scala)
> {noformat}





[jira] [Updated] (KAFKA-6050) --entity-name should print error message

2017-10-11 Thread Tom Bentley (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley updated KAFKA-6050:
---
Summary: --entity-name  should print error message  (was: Cannot 
alter default topic config)

> --entity-name  should print error message
> --
>
> Key: KAFKA-6050
> URL: https://issues.apache.org/jira/browse/KAFKA-6050
> Project: Kafka
>  Issue Type: Bug
>Reporter: Tom Bentley
>
> The command to describe the default topic config
> {noformat}
> bin/kafka-configs.sh --zookeeper localhost:2181 \
>   --describe --entity-type topics --entity-name ''
> {noformat}
> returns without error, but the equivalent command to alter the default topic 
> config:
> {noformat}
> bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
>   --entity-type topics --entity-name '' --add-config 
> retention.ms=1000
> {noformat}
> returns an error:
> {noformat}
> Error while executing config command Topic name "" is illegal, it 
> contains a character other than ASCII alphanumerics, '.', '_' and '-'
> org.apache.kafka.common.errors.InvalidTopicException: Topic name "" 
> is illegal, it contains a character other than ASCII alphanumerics, '.', '_' 
> and '-'
>   at org.apache.kafka.common.internals.Topic.validate(Topic.java:45)
>   at kafka.admin.AdminUtils$.validateTopicConfig(AdminUtils.scala:578)
>   at kafka.admin.AdminUtils$.changeTopicConfig(AdminUtils.scala:595)
>   at kafka.admin.AdminUtilities$class.changeConfigs(AdminUtils.scala:52)
>   at kafka.admin.AdminUtils$.changeConfigs(AdminUtils.scala:63)
>   at kafka.admin.ConfigCommand$.alterConfig(ConfigCommand.scala:103)
>   at kafka.admin.ConfigCommand$.main(ConfigCommand.scala:70)
>   at kafka.admin.ConfigCommand.main(ConfigCommand.scala)
> {noformat}





[jira] [Commented] (KAFKA-6050) Cannot alter default topic config

2017-10-11 Thread Tom Bentley (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200812#comment-16200812
 ] 

Tom Bentley commented on KAFKA-6050:


Ah, thanks [~mimaison], you're right! In that case I would suggest that trying 
to use {{}} as the {{--entity-name}} should consistently print a 
helpful error message about using {{--entity-default}}, rather than being 
accepted in one situation, and erroring in the other.

> Cannot alter default topic config
> -
>
> Key: KAFKA-6050
> URL: https://issues.apache.org/jira/browse/KAFKA-6050
> Project: Kafka
>  Issue Type: Bug
>Reporter: Tom Bentley
>
> The command to describe the default topic config
> {noformat}
> bin/kafka-configs.sh --zookeeper localhost:2181 \
>   --describe --entity-type topics --entity-name ''
> {noformat}
> returns without error, but the equivalent command to alter the default topic 
> config:
> {noformat}
> bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
>   --entity-type topics --entity-name '' --add-config 
> retention.ms=1000
> {noformat}
> returns an error:
> {noformat}
> Error while executing config command Topic name "" is illegal, it 
> contains a character other than ASCII alphanumerics, '.', '_' and '-'
> org.apache.kafka.common.errors.InvalidTopicException: Topic name "" 
> is illegal, it contains a character other than ASCII alphanumerics, '.', '_' 
> and '-'
>   at org.apache.kafka.common.internals.Topic.validate(Topic.java:45)
>   at kafka.admin.AdminUtils$.validateTopicConfig(AdminUtils.scala:578)
>   at kafka.admin.AdminUtils$.changeTopicConfig(AdminUtils.scala:595)
>   at kafka.admin.AdminUtilities$class.changeConfigs(AdminUtils.scala:52)
>   at kafka.admin.AdminUtils$.changeConfigs(AdminUtils.scala:63)
>   at kafka.admin.ConfigCommand$.alterConfig(ConfigCommand.scala:103)
>   at kafka.admin.ConfigCommand$.main(ConfigCommand.scala:70)
>   at kafka.admin.ConfigCommand.main(ConfigCommand.scala)
> {noformat}





[jira] [Commented] (KAFKA-6050) Cannot alter default topic config

2017-10-11 Thread Tom Bentley (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200035#comment-16200035
 ] 

Tom Bentley commented on KAFKA-6050:


[~huxi_2b] no worries, I didn't know about this magic topic name either, before 
discovering it in the code.

> Cannot alter default topic config
> -
>
> Key: KAFKA-6050
> URL: https://issues.apache.org/jira/browse/KAFKA-6050
> Project: Kafka
>  Issue Type: Bug
>Reporter: Tom Bentley
>
> The command to describe the default topic config
> {noformat}
> bin/kafka-configs.sh --zookeeper localhost:2181 \
>   --describe --entity-type topics --entity-name ''
> {noformat}
> returns without error, but the equivalent command to alter the default topic 
> config:
> {noformat}
> bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
>   --entity-type topics --entity-name '' --add-config 
> retention.ms=1000
> {noformat}
> returns an error:
> {noformat}
> Error while executing config command Topic name "" is illegal, it 
> contains a character other than ASCII alphanumerics, '.', '_' and '-'
> org.apache.kafka.common.errors.InvalidTopicException: Topic name "" 
> is illegal, it contains a character other than ASCII alphanumerics, '.', '_' 
> and '-'
>   at org.apache.kafka.common.internals.Topic.validate(Topic.java:45)
>   at kafka.admin.AdminUtils$.validateTopicConfig(AdminUtils.scala:578)
>   at kafka.admin.AdminUtils$.changeTopicConfig(AdminUtils.scala:595)
>   at kafka.admin.AdminUtilities$class.changeConfigs(AdminUtils.scala:52)
>   at kafka.admin.AdminUtils$.changeConfigs(AdminUtils.scala:63)
>   at kafka.admin.ConfigCommand$.alterConfig(ConfigCommand.scala:103)
>   at kafka.admin.ConfigCommand$.main(ConfigCommand.scala:70)
>   at kafka.admin.ConfigCommand.main(ConfigCommand.scala)
> {noformat}





[jira] [Commented] (KAFKA-6050) Cannot alter default topic config

2017-10-11 Thread Tom Bentley (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16200024#comment-16200024
 ] 

Tom Bentley commented on KAFKA-6050:


I'm not trying to create a topic with that name, I'm trying to change the 
default for a topic config. "" is used as a "placeholder" topic to 
store the default topic configurations (which explains why I'm able to 
{{--describe}} it). 

> Cannot alter default topic config
> -
>
> Key: KAFKA-6050
> URL: https://issues.apache.org/jira/browse/KAFKA-6050
> Project: Kafka
>  Issue Type: Bug
>Reporter: Tom Bentley
>
> The command to describe the default topic config
> {noformat}
> bin/kafka-configs.sh --zookeeper localhost:2181 \
>   --describe --entity-type topics --entity-name ''
> {noformat}
> returns without error, but the equivalent command to alter the default topic 
> config:
> {noformat}
> bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
>   --entity-type topics --entity-name '' --add-config 
> retention.ms=1000
> {noformat}
> returns an error:
> {noformat}
> Error while executing config command Topic name "" is illegal, it 
> contains a character other than ASCII alphanumerics, '.', '_' and '-'
> org.apache.kafka.common.errors.InvalidTopicException: Topic name "" 
> is illegal, it contains a character other than ASCII alphanumerics, '.', '_' 
> and '-'
>   at org.apache.kafka.common.internals.Topic.validate(Topic.java:45)
>   at kafka.admin.AdminUtils$.validateTopicConfig(AdminUtils.scala:578)
>   at kafka.admin.AdminUtils$.changeTopicConfig(AdminUtils.scala:595)
>   at kafka.admin.AdminUtilities$class.changeConfigs(AdminUtils.scala:52)
>   at kafka.admin.AdminUtils$.changeConfigs(AdminUtils.scala:63)
>   at kafka.admin.ConfigCommand$.alterConfig(ConfigCommand.scala:103)
>   at kafka.admin.ConfigCommand$.main(ConfigCommand.scala:70)
>   at kafka.admin.ConfigCommand.main(ConfigCommand.scala)
> {noformat}





[jira] [Created] (KAFKA-6050) Cannot alter default topic config

2017-10-11 Thread Tom Bentley (JIRA)
Tom Bentley created KAFKA-6050:
--

 Summary: Cannot alter default topic config
 Key: KAFKA-6050
 URL: https://issues.apache.org/jira/browse/KAFKA-6050
 Project: Kafka
  Issue Type: Bug
Reporter: Tom Bentley


The command to describe the default topic config
{noformat}
bin/kafka-configs.sh --zookeeper localhost:2181 \
  --describe --entity-type topics --entity-name ''
{noformat}

returns without error, but the equivalent command to alter the default topic 
config:

{noformat}
bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
  --entity-type topics --entity-name '' --add-config retention.ms=1000
{noformat}

returns an error:

{noformat}
Error while executing config command Topic name "" is illegal, it 
contains a character other than ASCII alphanumerics, '.', '_' and '-'
org.apache.kafka.common.errors.InvalidTopicException: Topic name "" is 
illegal, it contains a character other than ASCII alphanumerics, '.', '_' and 
'-'
at org.apache.kafka.common.internals.Topic.validate(Topic.java:45)
at kafka.admin.AdminUtils$.validateTopicConfig(AdminUtils.scala:578)
at kafka.admin.AdminUtils$.changeTopicConfig(AdminUtils.scala:595)
at kafka.admin.AdminUtilities$class.changeConfigs(AdminUtils.scala:52)
at kafka.admin.AdminUtils$.changeConfigs(AdminUtils.scala:63)
at kafka.admin.ConfigCommand$.alterConfig(ConfigCommand.scala:103)
at kafka.admin.ConfigCommand$.main(ConfigCommand.scala:70)
at kafka.admin.ConfigCommand.main(ConfigCommand.scala)
{noformat}





[jira] [Commented] (KAFKA-6046) DeleteRecordsRequest to a non-leader

2017-10-10 Thread Tom Bentley (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16198963#comment-16198963
 ] 

Tom Bentley commented on KAFKA-6046:


It _would be_ covered if execution ever got this far. In fact 
{{ReplicaManager.deleteRecordsOnLocalLog()}} has already thrown 
{{UnknownTopicOrPartitionException}}.

> DeleteRecordsRequest to a non-leader
> 
>
> Key: KAFKA-6046
> URL: https://issues.apache.org/jira/browse/KAFKA-6046
> Project: Kafka
>  Issue Type: Bug
>Reporter: Tom Bentley
> Fix For: 1.1.0
>
>
> When a `DeleteRecordsRequest` is sent to a broker that's not the leader for 
> the partition the  `DeleteRecordsResponse` returns 
> `UNKNOWN_TOPIC_OR_PARTITION`. This is ambiguous (does the topic not exist on 
> any broker, or did we just send the request to the wrong broker?), and 
> inconsistent (a `ProduceRequest` would return `NOT_LEADER_FOR_PARTITION`).





[jira] [Created] (KAFKA-6046) DeleteRecordsRequest to a non-leader

2017-10-10 Thread Tom Bentley (JIRA)
Tom Bentley created KAFKA-6046:
--

 Summary: DeleteRecordsRequest to a non-leader
 Key: KAFKA-6046
 URL: https://issues.apache.org/jira/browse/KAFKA-6046
 Project: Kafka
  Issue Type: Bug
Reporter: Tom Bentley
 Fix For: 1.1.0


When a `DeleteRecordsRequest` is sent to a broker that's not the leader for the 
partition the  `DeleteRecordsResponse` returns `UNKNOWN_TOPIC_OR_PARTITION`. 
This is ambiguous (does the topic not exist on any broker, or did we just send 
the request to the wrong broker?), and inconsistent (a `ProduceRequest` would 
return `NOT_LEADER_FOR_PARTITION`).
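
For reference, such a request can be issued from the command line with the 
kafka-delete-records.sh tool shipped in recent Kafka releases (a hedged sketch: 
the topic name, partition, offset and bootstrap address below are placeholders). 
When the request lands on a broker that is not the partition leader, the 
failure surfaces as the ambiguous UNKNOWN_TOPIC_OR_PARTITION described above.

```shell
# Sketch only: topic, partition, offset and broker address are placeholders.
# Write the offsets spec expected by the tool, then issue the request.
cat > /tmp/delete-offsets.json <<'EOF'
{"version": 1,
 "partitions": [{"topic": "my-topic", "partition": 0, "offset": 10}]}
EOF
bin/kafka-delete-records.sh --bootstrap-server localhost:9092 \
  --offset-json-file /tmp/delete-offsets.json
```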





[jira] [Comment Edited] (KAFKA-5693) TopicCreationPolicy and AlterConfigsPolicy overlap

2017-09-25 Thread Tom Bentley (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163291#comment-16163291
 ] 

Tom Bentley edited comment on KAFKA-5693 at 9/25/17 10:26 AM:
--

Some of KIP-179 subsequently got split into KIP-195 and applying the 
{{TopicCreationPolicy}} to the addition of partitions to a topic was mentioned 
in the KIP-195 [DISCUSS] thread. 

[~ijuma] said:

bq. About using the create topics policy, I'm not sure. Aside from the naming 
issue, there's also the problem that the policy doesn't know if a creation or 
update is taking place. This matters because one may not want to allow the 
number of partitions to be changed after creation as it affects the semantics 
if keys are used. One option is to introduce a new interface that can be used 
by create, alter and delete with a new config. And deprecate CreateTopicPolicy. 

Additionally KAFKA-5497/KIP-170 proposes to add a {{DeleteTopicPolicy}}. 

At this point the problem isn't so much that the TopicCreationPolicy and 
AlterConfigsPolicy overlap, it's that we're in danger of having a number of 
policies which overlap and are generally inconsistent.


was (Author: tombentley):
Some of KIP-179 subsequently got split into KIP-195 and applying the 
{{TopicCreationPolicy}} to the addition of partitions to a topic was mentioned 
in the KIP-195 [DISCUSS] thread. 

[~ijuma] said:

bq. About using the create topics policy, I'm not sure. Aside from the
naming issue, there's also the problem that the policy doesn't know if a
creation or update is taking place. This matters because one may not want
to allow the number of partitions to be changed after creation as it
affects the semantics if keys are used. One option is to introduce a new
interface that can be used by create, alter and delete with a new config.
And deprecate CreateTopicPolicy. 

Additionally KAFKA-5497/KIP-170 proposes to add a {{DeleteTopicPolicy}}. 

At this point the problem isn't so much that the TopicCreationPolicy and 
AlterConfigsPolicy overlap, it's that we're in danger of having a number of 
policies which overlap and are generally inconsistent.

> TopicCreationPolicy and AlterConfigsPolicy overlap
> --
>
> Key: KAFKA-5693
> URL: https://issues.apache.org/jira/browse/KAFKA-5693
> Project: Kafka
>  Issue Type: Bug
>Reporter: Tom Bentley
>Priority: Minor
>
> The administrator of a cluster can configure a {{CreateTopicPolicy}}, which 
> has access to the topic configs as well as other metadata to make its 
> decision about whether a topic creation is allowed. Thus in theory the 
> decision could be based on a combination of the replication factor and 
> the topic configs, for example. 
> Separately there is an AlterConfigPolicy, which only has access to the 
> configs (and can apply to configurable entities other than just topics).
> There are potential issues with this. For example although the 
> CreateTopicPolicy is checked at creation time, it's not checked for any later 
> alterations to the topic config. So policies which depend on both the topic 
> configs and other topic metadata could be worked around by changing the 
> configs after creation.





[jira] [Updated] (KAFKA-5601) Refactor ReassignPartitionsCommand to use AdminClient

2017-09-25 Thread Tom Bentley (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley updated KAFKA-5601:
---
Fix Version/s: 1.1.0

> Refactor ReassignPartitionsCommand to use AdminClient
> -
>
> Key: KAFKA-5601
> URL: https://issues.apache.org/jira/browse/KAFKA-5601
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Tom Bentley
>Assignee: Tom Bentley
>  Labels: kip
> Fix For: 1.1.0
>
>
> Currently the {{ReassignPartitionsCommand}} (used by 
> {{kafka-reassign-partitions.sh}}) talks directly to ZooKeeper. It would be 
> better to have it use the AdminClient API instead. 
> This would entail creating two new protocol APIs, one to initiate the request 
> and another to request the status of an in-progress reassignment. As such 
> this would require a KIP.
> This touches on the work of KIP-166, but that proposes to use the 
> {{ReassignPartitionsCommand}} API, so should not be affected so long as that 
> API is maintained. 





[jira] [Updated] (KAFKA-3575) Use console consumer access topic that does not exist, can not use "Control + C" to exit process

2017-09-25 Thread Tom Bentley (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley updated KAFKA-3575:
---
Fix Version/s: 1.1.0

> Use console consumer access topic that does not exist, can not use "Control + 
> C" to exit process
> 
>
> Key: KAFKA-3575
> URL: https://issues.apache.org/jira/browse/KAFKA-3575
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
> Environment: SUSE Linux Enterprise Server 11 SP3
>Reporter: NieWang
>Assignee: Tom Bentley
>Priority: Minor
>  Labels: patch-available
> Fix For: 1.1.0
>
>
> 1. Use "sh kafka-console-consumer.sh --zookeeper 10.252.23.133:2181 --topic 
> topic_02" to start the console consumer. topic_02 does not exist.
> 2. You cannot use "Control + C" to exit the console consumer process. The 
> process is blocked.
> 3. Use jstack to check the process stack, as follows:
> linux:~ # jstack 122967
> 2016-04-18 15:46:06
> Full thread dump Java HotSpot(TM) 64-Bit Server VM (25.66-b17 mixed mode):
> "Attach Listener" #29 daemon prio=9 os_prio=0 tid=0x01781800 
> nid=0x1e0c8 waiting on condition [0x]
>java.lang.Thread.State: RUNNABLE
> "Thread-4" #27 prio=5 os_prio=0 tid=0x018a4000 nid=0x1e08a waiting on 
> condition [0x7ffbe5ac]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0xe00ed3b8> (a 
> java.util.concurrent.CountDownLatch$Sync)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
> at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
> at kafka.tools.ConsoleConsumer$$anon$1.run(ConsoleConsumer.scala:101)
> "SIGINT handler" #28 daemon prio=9 os_prio=0 tid=0x019d5800 
> nid=0x1e089 in Object.wait() [0x7ffbe5bc1000]
>java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.$$YJP$$wait(Native Method)
> at java.lang.Object.wait(Object.java)
> at java.lang.Thread.join(Thread.java:1245)
> - locked <0xe71fd4e8> (a kafka.tools.ConsoleConsumer$$anon$1)
> at java.lang.Thread.join(Thread.java:1319)
> at 
> java.lang.ApplicationShutdownHooks.runHooks(ApplicationShutdownHooks.java:106)
> at 
> java.lang.ApplicationShutdownHooks$1.run(ApplicationShutdownHooks.java:46)
> at java.lang.Shutdown.runHooks(Shutdown.java:123)
> at java.lang.Shutdown.sequence(Shutdown.java:167)
> at java.lang.Shutdown.exit(Shutdown.java:212)
> - locked <0xe00abfd8> (a java.lang.Class for 
> java.lang.Shutdown)
> at java.lang.Terminator$1.handle(Terminator.java:52)
> at sun.misc.Signal$1.run(Signal.java:212)
> at java.lang.Thread.run(Thread.java:745)
> "metrics-meter-tick-thread-2" #20 daemon prio=5 os_prio=0 
> tid=0x7ffbec77a800 nid=0x1e079 waiting on condition [0x7ffbe66c8000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0xe6fa6438> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1088)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
> at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> "metrics-meter-tick-thread-1" #19 daemon prio=5 os_prio=0 
> tid=0x7ffbec783000 nid=0x1e078 waiting on condition [0x7ffbe67c9000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0xe6fa6438> (a 
> 

[jira] [Updated] (KAFKA-5692) Refactor PreferredReplicaLeaderElectionCommand to use AdminClient

2017-09-25 Thread Tom Bentley (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley updated KAFKA-5692:
---
Fix Version/s: (was: 1.0.0)
   1.1.0

> Refactor PreferredReplicaLeaderElectionCommand to use AdminClient
> -
>
> Key: KAFKA-5692
> URL: https://issues.apache.org/jira/browse/KAFKA-5692
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Tom Bentley
>Assignee: Tom Bentley
>Priority: Minor
>  Labels: kip, patch-available
> Fix For: 1.1.0
>
>
> The PreferredReplicaLeaderElectionCommand currently uses a direct connection 
> to zookeeper. The zookeeper dependency should be deprecated and an 
> AdminClient API created to be used instead. 
> This change will require a KIP.





[jira] [Updated] (KAFKA-4249) Document how to customize GC logging options for broker

2017-09-25 Thread Tom Bentley (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley updated KAFKA-4249:
---
Fix Version/s: 1.1.0

> Document how to customize GC logging options for broker
> ---
>
> Key: KAFKA-4249
> URL: https://issues.apache.org/jira/browse/KAFKA-4249
> Project: Kafka
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 0.10.0.1
>Reporter: Jim Hoagland
>Assignee: Tom Bentley
> Fix For: 1.1.0
>
>
> We wanted to enable GC logging for Kafka broker and saw that you can set 
> GC_LOG_ENABLED=true.  However, this didn't do what we wanted.  For example, 
> the GC log will be overwritten every time the broker gets restarted.  It 
> wasn't clear how we could do that (no documentation of it that I can find), 
> so I did some research by looking at the source code and did some testing and 
> found that KAFKA_GC_LOG_OPTS could be set with alternate JVM options prior to 
> starting broker.  I posted my solution to StackOverflow:
>   
> http://stackoverflow.com/questions/39854424/how-to-enable-gc-logging-for-apache-kafka-brokers-while-preventing-log-file-ove
> (feel free to critique)
> That solution is now public, but it seems like the Kafka documentation should 
> say how to do this.
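
The solution boils down to overriding the default GC flags via an environment 
variable before starting the broker. A hedged sketch follows: the variable name 
KAFKA_GC_LOG_OPTS is honoured by kafka-run-class.sh, but the log path and the 
JDK 8-style rotation flags shown are one illustrative choice, not the only one.

```shell
# Override the broker's GC logging options. kafka-run-class.sh only applies
# its defaults when KAFKA_GC_LOG_OPTS is unset, so exporting it first wins.
# The %t token makes the JVM expand a startup timestamp into the file name,
# so restarts do not overwrite earlier logs; rotation bounds disk usage.
export KAFKA_GC_LOG_OPTS="-Xloggc:/var/log/kafka/kafkaServer-gc-%t.log \
  -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
  -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M"
bin/kafka-server-start.sh config/server.properties
```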





[jira] [Updated] (KAFKA-4931) stop script fails due 4096 ps output limit

2017-09-25 Thread Tom Bentley (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley updated KAFKA-4931:
---
Fix Version/s: 1.1.0

> stop script fails due 4096 ps output limit
> --
>
> Key: KAFKA-4931
> URL: https://issues.apache.org/jira/browse/KAFKA-4931
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.10.2.0
>Reporter: Amit Jain
>Assignee: Tom Bentley
>Priority: Minor
>  Labels: patch-available
> Fix For: 1.1.0
>
>
> When run, the script bin/zookeeper-server-stop.sh fails to stop the zookeeper 
> server process if the ps output exceeds the 4096 character limit of Linux. I 
> think that instead of ps we can use ${JAVA_HOME}/bin/jps -vl | grep 
> QuorumPeerMain, which would correctly stop the zookeeper process. Currently 
> we are using kill with PIDS=$(ps ax | grep java | grep -i QuorumPeerMain | 
> grep -v grep | awk '{print $1}')





[jira] [Updated] (KAFKA-5517) Support linking to particular configuration parameters

2017-09-25 Thread Tom Bentley (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley updated KAFKA-5517:
---
Fix Version/s: 1.1.0

> Support linking to particular configuration parameters
> --
>
> Key: KAFKA-5517
> URL: https://issues.apache.org/jira/browse/KAFKA-5517
> Project: Kafka
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Tom Bentley
>Assignee: Tom Bentley
>Priority: Minor
>  Labels: patch-available
> Fix For: 1.1.0
>
>
> Currently the configuration parameters are documented in long tables, and it's 
> only possible to link to the heading before a particular table. When 
> discussing configuration parameters on forums it would be helpful to be able 
> to link to the particular parameter under discussion.





[jira] [Commented] (KAFKA-5877) Controller should only update reassignment znode if there is change in the reassignment data

2017-09-21 Thread Tom Bentley (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16175367#comment-16175367
 ] 

Tom Bentley commented on KAFKA-5877:


[~lindong] I was merely trying to point out (given that you had said you didn't 
fully understand the cause) that the proximate cause seemed to be with the set 
membership. In hindsight it just wasn't a very helpful comment. Sorry.

> Controller should only update reassignment znode if there is change in the 
> reassignment data
> 
>
> Key: KAFKA-5877
> URL: https://issues.apache.org/jira/browse/KAFKA-5877
> Project: Kafka
>  Issue Type: Bug
>Reporter: Dong Lin
>Assignee: Dong Lin
> Fix For: 1.0.0
>
>
> I encountered a scenario where controller keeps printing the following stack 
> trace repeatedly for a finite set of partitions. Although I have not fully 
> figured out the cause of this event, it seems that controller will update the 
> reassignment znode even if the new data is same as existing data. This patch 
> optimizes the controller behavior by only updating reassignment znode if it 
> needs to change the reassignment znode data.
> 2017/09/12 20:34:05.842 [KafkaController] [Controller 1376005]: Error 
> completing reassignment of partition [FederatorResultEvent,202]
> kafka.common.KafkaException: Partition [FederatorResultEvent,202] to be 
> reassigned is already assigned to replicas 1367001,1384010,1386010. Ignoring 
> request for partition reassignment
> at 
> kafka.controller.KafkaController.initiateReassignReplicasForTopicPartition(KafkaController.scala:608)
>  ~[kafka_2.10-0.11.0.9.jar:?]
> at 
> kafka.controller.KafkaController$PartitionReassignment$$anonfun$process$14.apply(KafkaController.scala:1327)
>  ~[kafka_2.10-0.11.0.9.jar:?]
> at 
> kafka.controller.KafkaController$PartitionReassignment$$anonfun$process$14.apply(KafkaController.scala:1320)
>  ~[kafka_2.10-0.11.0.9.jar:?]
> at 
> scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:224) 
> ~[scala-library-2.10.4.jar:?]
> at 
> scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:403) 
> ~[scala-library-2.10.4.jar:?]
> at 
> scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:403) 
> ~[scala-library-2.10.4.jar:?]
> at 
> kafka.controller.KafkaController$PartitionReassignment.process(KafkaController.scala:1320)
>  ~[kafka_2.10-0.11.0.9.jar:?]
> at 
> kafka.controller.ControllerEventManager$ControllerEventThread$$anonfun$doWork$1.apply$mcV$sp(ControllerEventManager.scala:53)
>  ~[kafka_2.10-0.11.0.9.jar:?]
> at 
> kafka.controller.ControllerEventManager$ControllerEventThread$$anonfun$doWork$1.apply(ControllerEventManager.scala:53)
>  ~[kafka_2.10-0.11.0.9.jar:?]
> at 
> kafka.controller.ControllerEventManager$ControllerEventThread$$anonfun$doWork$1.apply(ControllerEventManager.scala:53)
>  ~[kafka_2.10-0.11.0.9.jar:?]
> at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:31) 
> ~[kafka_2.10-0.11.0.9.jar:?]
> at 
> kafka.controller.ControllerEventManager$ControllerEventThread.doWork(ControllerEventManager.scala:52)
>  ~[kafka_2.10-0.11.0.9.jar:?]
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:64) 
> ~[kafka_2.10-0.11.0.9.jar:?]





[jira] [Updated] (KAFKA-5856) Add AdminClient.createPartitions()

2017-09-15 Thread Tom Bentley (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley updated KAFKA-5856:
---
Summary: Add AdminClient.createPartitions()  (was: AdminClient should be 
able to increase number of partitions)

> Add AdminClient.createPartitions()
> --
>
> Key: KAFKA-5856
> URL: https://issues.apache.org/jira/browse/KAFKA-5856
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Tom Bentley
>Assignee: Tom Bentley
>  Labels: kip
>
> It should be possible to increase the partition count using the AdminClient. 
> See 
> [KIP-195|https://cwiki.apache.org/confluence/display/KAFKA/KIP-195%3A+AdminClient.increasePartitions]





[jira] [Updated] (KAFKA-5692) Refactor PreferredReplicaLeaderElectionCommand to use AdminClient

2017-09-13 Thread Tom Bentley (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley updated KAFKA-5692:
---
Fix Version/s: 1.0.0

> Refactor PreferredReplicaLeaderElectionCommand to use AdminClient
> -
>
> Key: KAFKA-5692
> URL: https://issues.apache.org/jira/browse/KAFKA-5692
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Tom Bentley
>Assignee: Tom Bentley
>Priority: Minor
>  Labels: kip
> Fix For: 1.0.0
>
>
> The PreferredReplicaLeaderElectionCommand currently uses a direct connection 
> to zookeeper. The zookeeper dependency should be deprecated and an 
> AdminClient API created to be used instead. 
> This change will require a KIP.





[jira] [Commented] (KAFKA-5877) Controller should only update reassignment znode if there is change in the reassignment data

2017-09-12 Thread Tom Bentley (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5877?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163696#comment-16163696
 ] 

Tom Bentley commented on KAFKA-5877:



{quote}
Although I have not fully figured out the cause of this event
{quote}

Looking purely at the patch, if the patch fixes the problem you observe then 
logically the {{topicAndPartition}} being removed isn't a member of the 
{{partitionsBeingReassigned}} it's being removed from.

> Controller should only update reassignment znode if there is change in the 
> reassignment data
> 
>
> Key: KAFKA-5877
> URL: https://issues.apache.org/jira/browse/KAFKA-5877
> Project: Kafka
>  Issue Type: Bug
>Reporter: Dong Lin
>Assignee: Dong Lin
>
> I encountered a scenario where controller keeps printing the following stack 
> trace repeatedly for a finite set of partitions. Although I have not fully 
> figured out the cause of this event, it seems that controller will update the 
> reassignment znode even if the new data is same as existing data. This patch 
> optimizes the controller behavior by only updating reassignment znode if it 
> needs to change the reassignment znode data.
> 2017/09/12 20:34:05.842 [KafkaController] [Controller 1376005]: Error 
> completing reassignment of partition [FederatorResultEvent,202]
> kafka.common.KafkaException: Partition [FederatorResultEvent,202] to be 
> reassigned is already assigned to replicas 1367001,1384010,1386010. Ignoring 
> request for partition reassignment
> at 
> kafka.controller.KafkaController.initiateReassignReplicasForTopicPartition(KafkaController.scala:608)
>  ~[kafka_2.10-0.11.0.9.jar:?]
> at 
> kafka.controller.KafkaController$PartitionReassignment$$anonfun$process$14.apply(KafkaController.scala:1327)
>  ~[kafka_2.10-0.11.0.9.jar:?]
> at 
> kafka.controller.KafkaController$PartitionReassignment$$anonfun$process$14.apply(KafkaController.scala:1320)
>  ~[kafka_2.10-0.11.0.9.jar:?]
> at 
> scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:224) 
> ~[scala-library-2.10.4.jar:?]
> at 
> scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:403) 
> ~[scala-library-2.10.4.jar:?]
> at 
> scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:403) 
> ~[scala-library-2.10.4.jar:?]
> at 
> kafka.controller.KafkaController$PartitionReassignment.process(KafkaController.scala:1320)
>  ~[kafka_2.10-0.11.0.9.jar:?]
> at 
> kafka.controller.ControllerEventManager$ControllerEventThread$$anonfun$doWork$1.apply$mcV$sp(ControllerEventManager.scala:53)
>  ~[kafka_2.10-0.11.0.9.jar:?]
> at 
> kafka.controller.ControllerEventManager$ControllerEventThread$$anonfun$doWork$1.apply(ControllerEventManager.scala:53)
>  ~[kafka_2.10-0.11.0.9.jar:?]
> at 
> kafka.controller.ControllerEventManager$ControllerEventThread$$anonfun$doWork$1.apply(ControllerEventManager.scala:53)
>  ~[kafka_2.10-0.11.0.9.jar:?]
> at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:31) 
> ~[kafka_2.10-0.11.0.9.jar:?]
> at 
> kafka.controller.ControllerEventManager$ControllerEventThread.doWork(ControllerEventManager.scala:52)
>  ~[kafka_2.10-0.11.0.9.jar:?]
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:64) 
> ~[kafka_2.10-0.11.0.9.jar:?]





[jira] [Commented] (KAFKA-5693) TopicCreationPolicy and AlterConfigsPolicy overlap

2017-09-12 Thread Tom Bentley (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16163291#comment-16163291
 ] 

Tom Bentley commented on KAFKA-5693:


Some of KIP-179 subsequently got split into KIP-195, and applying the 
{{TopicCreationPolicy}} to the addition of partitions to a topic was mentioned 
in the KIP-195 [DISCUSS] thread. 

[~ijuma] said:

bq. About using the create topics policy, I'm not sure. Aside from the
naming issue, there's also the problem that the policy doesn't know if a
creation or update is taking place. This matters because one may not want
to allow the number of partitions to be changed after creation as it
affects the semantics if keys are used. One option is to introduce a new
interface that can be used by create, alter and delete with a new config.
And deprecate CreateTopicPolicy. 

Additionally KAFKA-5497/KIP-170 proposes to add a {{DeleteTopicPolicy}}. 

At this point the problem isn't so much that the TopicCreationPolicy and 
AlterConfigsPolicy overlap, it's that we're in danger of having a number of 
policies which overlap and are generally inconsistent.

> TopicCreationPolicy and AlterConfigsPolicy overlap
> --
>
> Key: KAFKA-5693
> URL: https://issues.apache.org/jira/browse/KAFKA-5693
> Project: Kafka
>  Issue Type: Bug
>Reporter: Tom Bentley
>Priority: Minor
>
> The administrator of a cluster can configure a {{CreateTopicPolicy}}, which 
> has access to the topic configs as well as other metadata to make its 
> decision about whether a topic creation is allowed. Thus in theory the 
> decision could be based on a combination of the replication factor, and 
> the topic configs, for example. 
> Separately there is an AlterConfigPolicy, which only has access to the 
> configs (and can apply to configurable entities other than just topics).
> There are potential issues with this. For example although the 
> CreateTopicPolicy is checked at creation time, it's not checked for any later 
> alterations to the topic config. So policies which depend on both the topic 
> configs and other topic metadata could be worked around by changing the 
> configs after creation.





[jira] [Created] (KAFKA-5860) Prevent non-consecutive partition ids

2017-09-08 Thread Tom Bentley (JIRA)
Tom Bentley created KAFKA-5860:
--

 Summary: Prevent non-consecutive partition ids
 Key: KAFKA-5860
 URL: https://issues.apache.org/jira/browse/KAFKA-5860
 Project: Kafka
  Issue Type: Improvement
Reporter: Tom Bentley
Priority: Minor


It is possible to create non-consecutive partition ids via 
AdminClient.createTopics() and kafka-topics.sh. It's not clear that this 
has any use cases, nor that it is well tested. 

Since people generally assume partition ids will be consecutive, it is likely 
to be a cause of bugs in both Kafka and user code.

We should remove the ability to create topics with non-consecutive partition 
ids.





[jira] [Created] (KAFKA-5856) AdminClient should be able to increase number of partitions

2017-09-07 Thread Tom Bentley (JIRA)
Tom Bentley created KAFKA-5856:
--

 Summary: AdminClient should be able to increase number of 
partitions
 Key: KAFKA-5856
 URL: https://issues.apache.org/jira/browse/KAFKA-5856
 Project: Kafka
  Issue Type: Improvement
Reporter: Tom Bentley
Assignee: Tom Bentley


It should be possible to increase the partition count using the AdminClient. 

See 
[KIP-195|https://cwiki.apache.org/confluence/display/KAFKA/KIP-195%3A+AdminClient.increasePartitions]





[jira] [Updated] (KAFKA-5692) Refactor PreferredReplicaLeaderElectionCommand to use AdminClient

2017-08-03 Thread Tom Bentley (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley updated KAFKA-5692:
---
Issue Type: Improvement  (was: Bug)

> Refactor PreferredReplicaLeaderElectionCommand to use AdminClient
> -
>
> Key: KAFKA-5692
> URL: https://issues.apache.org/jira/browse/KAFKA-5692
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Tom Bentley
>Assignee: Tom Bentley
>Priority: Minor
>  Labels: kip
>
> The PreferredReplicaLeaderElectionCommand currently uses a direct connection 
> to zookeeper. The zookeeper dependency should be deprecated and an 
> AdminClient API created to be used instead. 
> This change will require a KIP.





[jira] [Commented] (KAFKA-3268) Refactor existing CLI scripts to use KafkaAdminClient

2017-08-03 Thread Tom Bentley (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16112603#comment-16112603
 ] 

Tom Bentley commented on KAFKA-3268:


[~viktorsomogyi] there is no tracking JIRA that I'm aware of. Rather than 
closing this one only to create a tracking one let's just use this JIRA.

You're right, both those other commands will also need to be changed, so 
creating JIRAs and KIPs for those is fine with me.



> Refactor existing CLI scripts to use KafkaAdminClient
> -
>
> Key: KAFKA-3268
> URL: https://issues.apache.org/jira/browse/KAFKA-3268
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Grant Henke
>Assignee: Viktor Somogyi
>






[jira] [Commented] (KAFKA-5693) TopicCreationPolicy and AlterConfigsPolicy overlap

2017-08-03 Thread Tom Bentley (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16112403#comment-16112403
 ] 

Tom Bentley commented on KAFKA-5693:


Would this need a KIP? It's not changing a public API as such, but from a 
user's point of view, topic creations or topic config modifications which were 
permitted before would now be rejected. 

Note that some aspects of this (but not all) are already included in 
[KIP-179|https://cwiki.apache.org/confluence/display/KAFKA/KIP-179+-+Change+ReassignPartitionsCommand+to+use+AdminClient]

> TopicCreationPolicy and AlterConfigsPolicy overlap
> --
>
> Key: KAFKA-5693
> URL: https://issues.apache.org/jira/browse/KAFKA-5693
> Project: Kafka
>  Issue Type: Bug
>Reporter: Tom Bentley
>Priority: Minor
>
> The administrator of a cluster can configure a {{CreateTopicPolicy}}, which 
> has access to the topic configs as well as other metadata to make its 
> decision about whether a topic creation is allowed. Thus in theory the 
> decision could be based on a combination of the replication factor, and 
> the topic configs, for example. 
> Separately there is an AlterConfigPolicy, which only has access to the 
> configs (and can apply to configurable entities other than just topics).
> There are potential issues with this. For example although the 
> CreateTopicPolicy is checked at creation time, it's not checked for any later 
> alterations to the topic config. So policies which depend on both the topic 
> configs and other topic metadata could be worked around by changing the 
> configs after creation.





[jira] [Updated] (KAFKA-5692) Refactor PreferredReplicaLeaderElectionCommand to use AdminClient

2017-08-03 Thread Tom Bentley (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley updated KAFKA-5692:
---
Labels: kip  (was: needs-kip)

> Refactor PreferredReplicaLeaderElectionCommand to use AdminClient
> -
>
> Key: KAFKA-5692
> URL: https://issues.apache.org/jira/browse/KAFKA-5692
> Project: Kafka
>  Issue Type: Bug
>Reporter: Tom Bentley
>Assignee: Tom Bentley
>Priority: Minor
>  Labels: kip
>
> The PreferredReplicaLeaderElectionCommand currently uses a direct connection 
> to zookeeper. The zookeeper dependency should be deprecated and an 
> AdminClient API created to be used instead. 
> This change will require a KIP.





[jira] [Commented] (KAFKA-5693) TopicCreationPolicy and AlterConfigsPolicy overlap

2017-08-02 Thread Tom Bentley (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16111313#comment-16111313
 ] 

Tom Bentley commented on KAFKA-5693:


One (obvious) partial solution to this is that the {{CreateTopicPolicy}} should be 
applied not just at topic creation, but also for every topic modification, 
whether it's a change to the topic configs or something else. Unless there's a 
valid use case for configuring changed topics differently to freshly created 
topics?

By symmetry, unless it's OK for a noop topic config change to be rejected by 
the {{AlterConfigPolicy}}, maybe the topic config supplied during topic 
creation should also be run through the {{AlterConfigPolicy}}. Again, maybe 
there are valid use cases for allowing a topic config at creation which would 
be disallowed in a later modification, but I can't think of any.



> TopicCreationPolicy and AlterConfigsPolicy overlap
> --
>
> Key: KAFKA-5693
> URL: https://issues.apache.org/jira/browse/KAFKA-5693
> Project: Kafka
>  Issue Type: Bug
>Reporter: Tom Bentley
>Priority: Minor
>
> The administrator of a cluster can configure a {{CreateTopicPolicy}}, which 
> has access to the topic configs as well as other metadata to make its 
> decision about whether a topic creation is allowed. Thus in theory the 
> decision could be based on a combination of the replication factor, and 
> the topic configs, for example. 
> Separately there is an AlterConfigPolicy, which only has access to the 
> configs (and can apply to configurable entities other than just topics).
> There are potential issues with this. For example although the 
> CreateTopicPolicy is checked at creation time, it's not checked for any later 
> alterations to the topic config. So policies which depend on both the topic 
> configs and other topic metadata could be worked around by changing the 
> configs after creation.





[jira] [Created] (KAFKA-5693) TopicCreationPolicy and AlterConfigsPolicy overlap

2017-08-02 Thread Tom Bentley (JIRA)
Tom Bentley created KAFKA-5693:
--

 Summary: TopicCreationPolicy and AlterConfigsPolicy overlap
 Key: KAFKA-5693
 URL: https://issues.apache.org/jira/browse/KAFKA-5693
 Project: Kafka
  Issue Type: Bug
Reporter: Tom Bentley
Priority: Minor


The administrator of a cluster can configure a {{CreateTopicPolicy}}, which has 
access to the topic configs as well as other metadata to make its decision 
about whether a topic creation is allowed. Thus in theory the decision could be 
based on a combination of the replication factor, and the topic configs, for 
example. 

Separately there is an AlterConfigPolicy, which only has access to the configs 
(and can apply to configurable entities other than just topics).

There are potential issues with this. For example although the 
CreateTopicPolicy is checked at creation time, it's not checked for any later 
alterations to the topic config. So policies which depend on both the topic 
configs and other topic metadata could be worked around by changing the configs 
after creation.






[jira] [Updated] (KAFKA-5692) Refactor PreferredReplicaLeaderElectionCommand to use AdminClient

2017-08-02 Thread Tom Bentley (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley updated KAFKA-5692:
---
Description: 
The PreferredReplicaLeaderElectionCommand currently uses a direct connection to 
zookeeper. The zookeeper dependency should be deprecated and an AdminClient API 
created to be used instead. 

This change will require a KIP.

  was:
The PreferredReplicaLeaderElectionCommand currently uses a direction connection 
to zookeeper. The zookeeper dependency should be deprecated and an AdminClient 
API created to be used instead. 

This change will require a KIP.


> Refactor PreferredReplicaLeaderElectionCommand to use AdminClient
> -
>
> Key: KAFKA-5692
> URL: https://issues.apache.org/jira/browse/KAFKA-5692
> Project: Kafka
>  Issue Type: Bug
>Reporter: Tom Bentley
>Assignee: Tom Bentley
>Priority: Minor
>  Labels: needs-kip
>
> The PreferredReplicaLeaderElectionCommand currently uses a direct connection 
> to zookeeper. The zookeeper dependency should be deprecated and an 
> AdminClient API created to be used instead. 
> This change will require a KIP.





[jira] [Commented] (KAFKA-5692) Refactor PreferredReplicaLeaderElectionCommand to use AdminClient

2017-08-02 Thread Tom Bentley (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16111088#comment-16111088
 ] 

Tom Bentley commented on KAFKA-5692:


This is now covered by 
[KIP-183|https://cwiki.apache.org/confluence/display/KAFKA/KIP-183+-+Change+PreferredReplicaLeaderElectionCommand+to+use+AdminClient]

> Refactor PreferredReplicaLeaderElectionCommand to use AdminClient
> -
>
> Key: KAFKA-5692
> URL: https://issues.apache.org/jira/browse/KAFKA-5692
> Project: Kafka
>  Issue Type: Bug
>Reporter: Tom Bentley
>Assignee: Tom Bentley
>Priority: Minor
>  Labels: needs-kip
>
> The PreferredReplicaLeaderElectionCommand currently uses a direction 
> connection to zookeeper. The zookeeper dependency should be deprecated and an 
> AdminClient API created to be used instead. 
> This change will require a KIP.





[jira] [Commented] (KAFKA-5601) Refactor ReassignPartitionsCommand to use AdminClient

2017-07-19 Thread Tom Bentley (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16093250#comment-16093250
 ] 

Tom Bentley commented on KAFKA-5601:


[KIP-179|https://cwiki.apache.org/confluence/display/KAFKA/KIP-179+-+Change+ReassignPartitionsCommand+to+use+AdminClient]
 has been created for this issue.

> Refactor ReassignPartitionsCommand to use AdminClient
> -
>
> Key: KAFKA-5601
> URL: https://issues.apache.org/jira/browse/KAFKA-5601
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Tom Bentley
>Assignee: Tom Bentley
>  Labels: kip
>
> Currently the {{ReassignPartitionsCommand}} (used by 
> {{kafka-reassign-partitions.sh}}) talks directly to ZooKeeper. It would be 
> better to have it use the AdminClient API instead. 
> This would entail creating two new protocol APIs, one to initiate the request 
> and another to request the status of an in-progress reassignment. As such 
> this would require a KIP.
> This touches on the work of KIP-166, but that proposes to use the 
> {{ReassignPartitionsCommand}} API, so should not be affected so long as that 
> API is maintained. 





[jira] [Updated] (KAFKA-5601) Refactor ReassignPartitionsCommand to use AdminClient

2017-07-19 Thread Tom Bentley (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley updated KAFKA-5601:
---
Labels: kip  (was: )

> Refactor ReassignPartitionsCommand to use AdminClient
> -
>
> Key: KAFKA-5601
> URL: https://issues.apache.org/jira/browse/KAFKA-5601
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Tom Bentley
>Assignee: Tom Bentley
>  Labels: kip
>
> Currently the {{ReassignPartitionsCommand}} (used by 
> {{kafka-reassign-partitions.sh}}) talks directly to ZooKeeper. It would be 
> better to have it use the AdminClient API instead. 
> This would entail creating two new protocol APIs, one to initiate the request 
> and another to request the status of an in-progress reassignment. As such 
> this would require a KIP.
> This touches on the work of KIP-166, but that proposes to use the 
> {{ReassignPartitionsCommand}} API, so should not be affected so long as that 
> API is maintained. 





[jira] [Created] (KAFKA-5601) Refactor ReassignPartitionsCommand to use AdminClient

2017-07-17 Thread Tom Bentley (JIRA)
Tom Bentley created KAFKA-5601:
--

 Summary: Refactor ReassignPartitionsCommand to use AdminClient
 Key: KAFKA-5601
 URL: https://issues.apache.org/jira/browse/KAFKA-5601
 Project: Kafka
  Issue Type: Improvement
Reporter: Tom Bentley
Assignee: Tom Bentley


Currently the {{ReassignPartitionsCommand}} (used by 
{{kafka-reassign-partitions.sh}}) talks directly to ZooKeeper. It would be 
better to have it use the AdminClient API instead. 

This would entail creating two new protocol APIs, one to initiate the request 
and another to request the status of an in-progress reassignment. As such this 
would require a KIP.

This touches on the work of KIP-166, but that proposes to use the 
{{ReassignPartitionsCommand}} API, so should not be affected so long as that 
API is maintained. 





[jira] [Commented] (KAFKA-3575) Use console consumer access topic that does not exist, can not use "Control + C" to exit process

2017-07-12 Thread Tom Bentley (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084112#comment-16084112
 ] 

Tom Bentley commented on KAFKA-3575:


Are any of the committers able to look at my PR for this, please?

> Use console consumer access topic that does not exist, can not use "Control + 
> C" to exit process
> 
>
> Key: KAFKA-3575
> URL: https://issues.apache.org/jira/browse/KAFKA-3575
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
> Environment: SUSE Linux Enterprise Server 11 SP3
>Reporter: NieWang
>Assignee: Tom Bentley
>Priority: Minor
>  Labels: patch-available
>
> 1.  use "sh kafka-console-consumer.sh --zookeeper 10.252.23.133:2181 --topic 
> topic_02"  start console consumer. topic_02 does not exist.
> 2. you can not use "Control + C" to exit console consumer process. The 
> process is blocked.
> 3. use jstack check process stack, as follows:
> linux:~ # jstack 122967
> 2016-04-18 15:46:06
> Full thread dump Java HotSpot(TM) 64-Bit Server VM (25.66-b17 mixed mode):
> "Attach Listener" #29 daemon prio=9 os_prio=0 tid=0x01781800 
> nid=0x1e0c8 waiting on condition [0x]
>java.lang.Thread.State: RUNNABLE
> "Thread-4" #27 prio=5 os_prio=0 tid=0x018a4000 nid=0x1e08a waiting on 
> condition [0x7ffbe5ac]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0xe00ed3b8> (a 
> java.util.concurrent.CountDownLatch$Sync)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
> at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
> at kafka.tools.ConsoleConsumer$$anon$1.run(ConsoleConsumer.scala:101)
> "SIGINT handler" #28 daemon prio=9 os_prio=0 tid=0x019d5800 
> nid=0x1e089 in Object.wait() [0x7ffbe5bc1000]
>java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.$$YJP$$wait(Native Method)
> at java.lang.Object.wait(Object.java)
> at java.lang.Thread.join(Thread.java:1245)
> - locked <0xe71fd4e8> (a kafka.tools.ConsoleConsumer$$anon$1)
> at java.lang.Thread.join(Thread.java:1319)
> at 
> java.lang.ApplicationShutdownHooks.runHooks(ApplicationShutdownHooks.java:106)
> at 
> java.lang.ApplicationShutdownHooks$1.run(ApplicationShutdownHooks.java:46)
> at java.lang.Shutdown.runHooks(Shutdown.java:123)
> at java.lang.Shutdown.sequence(Shutdown.java:167)
> at java.lang.Shutdown.exit(Shutdown.java:212)
> - locked <0xe00abfd8> (a java.lang.Class for 
> java.lang.Shutdown)
> at java.lang.Terminator$1.handle(Terminator.java:52)
> at sun.misc.Signal$1.run(Signal.java:212)
> at java.lang.Thread.run(Thread.java:745)
> "metrics-meter-tick-thread-2" #20 daemon prio=5 os_prio=0 
> tid=0x7ffbec77a800 nid=0x1e079 waiting on condition [0x7ffbe66c8000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0xe6fa6438> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1088)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
> at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> "metrics-meter-tick-thread-1" #19 daemon prio=5 os_prio=0 
> tid=0x7ffbec783000 nid=0x1e078 waiting on condition [0x7ffbe67c9000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0xe6fa6438> (a 
> 

[jira] [Commented] (KAFKA-4260) Improve documentation of configuration listeners=PLAINTEXT://0.0.0.0:9092

2017-07-12 Thread Tom Bentley (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16084113#comment-16084113
 ] 

Tom Bentley commented on KAFKA-4260:


Are any of the committers able to look at my PR for this, please?

> Improve documentation of configuration listeners=PLAINTEXT://0.0.0.0:9092
> -
>
> Key: KAFKA-4260
> URL: https://issues.apache.org/jira/browse/KAFKA-4260
> Project: Kafka
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 0.10.0.1
>Reporter: Michal Turek
>Assignee: Tom Bentley
>Priority: Minor
>  Labels: patch-available
>
> We have just updated our testing Kafka cluster to 0.10 and we were facing one 
> issue with migration of legacy 0.8 configuration to
> {noformat}
> listeners=PLAINTEXT://0.0.0.0:9092
> # advertised.listeners=PLAINTEXT://myPublicHostName:9092 # REQUIRED for 
> 0.0.0.0:9092
> {noformat}
> This configuration will be invalid if {{advertised.listeners}} is not set 
> too. Connection string 0.0.0.0:9092 is stored to ZooKeeper according to 
> documentation of  {{advertised.listeners}} and observed behavior, but it 
> isn't obvious and is difficult to analyze. Clients and even other brokers try to 
> communicate with brokers using destination address 0.0.0.0:9092, which is 
> impossible. Specification of {{advertised.listeners}} as shown above fixed 
> the issue.
> Please update documentation at 
> http://kafka.apache.org/0100/documentation#brokerconfigs and backport the 
> change to 0.9 and 0.10 branches.
> h4. advertised.listeners
> Listeners to publish to ZooKeeper for clients to use, if different than the 
> *`listeners`* -above-. In IaaS environments, this may need to be different 
> from the interface to which the broker binds. If this is not set, the value 
> for `listeners` will be used.
> h4. listeners
> Listener List - Comma-separated list of URIs we will listen on and their 
> protocols. Specify hostname as 0.0.0.0 to bind to all interfaces *(note 
> `advertised.listeners` configuration is required for 0.0.0.0)*. Leave 
> hostname empty to bind to default interface. Examples of legal listener 
> lists: PLAINTEXT://myhost:9092,TRACE://:9091 PLAINTEXT://0.0.0.0:9092, 
> TRACE://localhost:9093
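
In other words, the working configuration from the quoted report amounts to the following (the host name is the reporter's placeholder, not a real address):

```properties
listeners=PLAINTEXT://0.0.0.0:9092
# REQUIRED when binding to 0.0.0.0: an address clients can actually reach,
# since this value (not 0.0.0.0) is what gets published to ZooKeeper
advertised.listeners=PLAINTEXT://myPublicHostName:9092
```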



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-4931) stop script fails due 4096 ps output limit

2017-07-11 Thread Tom Bentley (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16081855#comment-16081855
 ] 

Tom Bentley commented on KAFKA-4931:


The total size of your {{ps ax}} output covers all the processes it lists; this 
issue is about the maximum length it can output for a single process. To show 
the command line of a particular process, Linux {{ps}} reads the 
{{/proc/$pid/cmdline}} virtual file. Apparently, {{/proc/$pid/cmdline}} is 
limited to a single page of memory 
(https://stackoverflow.com/questions/199130/how-do-i-increase-the-proc-pid-cmdline-4096-byte-limit).
 The page size depends on the options your kernel was compiled with. 
Clearly there is _a_ limit, as this bug has been reported three times. 

Of course, this is all for linux. I don't know how the limits might be 
different on other unixes. 

For completeness: extra options to {{ps}} (e.g. {{ps axww}}) can be used to get 
longer output, but I believe {{ps}} only shortens its output when writing to a 
tty, which is not the case in the stop script, so such options won't help us.



> stop script fails due 4096 ps output limit
> --
>
> Key: KAFKA-4931
> URL: https://issues.apache.org/jira/browse/KAFKA-4931
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.10.2.0
>Reporter: Amit Jain
>Assignee: Tom Bentley
>Priority: Minor
>  Labels: patch-available
>
> When run the script: bin/zookeeper-server-stop.sh fails to stop the zookeeper 
> server process if the ps output exceeds 4096 character limit of linux. I 
> think instead of ps we can use ${JAVA_HOME}/bin/jps -vl | grep QuorumPeerMain 
>  it would correctly stop zookeeper process. Currently we are using kill 
> PIDS=$(ps ax | grep java | grep -i QuorumPeerMain | grep -v grep | awk 
> '{print $1}')



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-5255) Auto generate request/response classes

2017-07-05 Thread Tom Bentley (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley reassigned KAFKA-5255:
--

Assignee: Tom Bentley

> Auto generate request/response classes
> --
>
> Key: KAFKA-5255
> URL: https://issues.apache.org/jira/browse/KAFKA-5255
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Ismael Juma
>Assignee: Tom Bentley
> Fix For: 0.11.1.0
>
>
> We should automatically generate the request/response classes from the 
> protocol definition. This is a major source of boilerplate, development 
> effort and inconsistency at the moment. If we auto-generate the classes, we 
> may also be able to avoid the intermediate `Struct` representation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5554) Highlight config settings for particular common use cases

2017-07-04 Thread Tom Bentley (JIRA)
Tom Bentley created KAFKA-5554:
--

 Summary: Highlight config settings for particular common use cases
 Key: KAFKA-5554
 URL: https://issues.apache.org/jira/browse/KAFKA-5554
 Project: Kafka
  Issue Type: Improvement
  Components: documentation
Reporter: Tom Bentley
Assignee: Tom Bentley
Priority: Minor


Judging by the sorts of questions seen on the mailing list, Stack Overflow, 
etc., it seems common for users to assume that Kafka defaults to settings which 
won't lose messages. They start using Kafka and only at some later time 
discover that messages have been lost.

While it's not our fault if users don't read the documentation, there's a lot 
of configuration documentation to digest and it's easy for people to miss an 
important setting.

Therefore, I'd like to suggest that in addition to the current configuration 
docs we add a short section highlighting those settings which pertain to common 
use cases, such as:

* configs to avoid lost messages
* configs for low latency

I'm sure some users will continue not to read the documentation, but when they 
inevitably start asking questions it means people can respond with "have you 
configured everything as described here?"
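
For illustration, the kind of settings an "avoid lost messages" section might collect in one place (a sketch only; the values shown are examples, not recommendations):

```properties
# Producer side: wait for all in-sync replicas to acknowledge each write,
# and keep retrying transient failures
acks=all
retries=2147483647

# Broker/topic side: never elect an out-of-sync replica as leader,
# and require enough in-sync replicas for acks=all writes to succeed
unclean.leader.election.enable=false
min.insync.replicas=2
default.replication.factor=3
```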



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5536) Tools splitted between Java and Scala implementation

2017-06-29 Thread Tom Bentley (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16067976#comment-16067976
 ] 

Tom Bentley commented on KAFKA-5536:


Isn't it simply that the Kafka core is all written in Scala, and the other 
components are written in Java? I think if the tools in core were in Java and 
the rest of core were in Scala, that would be even more inconsistent.

> Tools splitted between Java and Scala implementation
> 
>
> Key: KAFKA-5536
> URL: https://issues.apache.org/jira/browse/KAFKA-5536
> Project: Kafka
>  Issue Type: Wish
>Reporter: Paolo Patierno
>
> Hi,
> is there any specific reason why tools are splitted between Java and Scala 
> implementations ?
> Maybe it could be better having only one language for all of them.
> What do you think ?
> Thanks,
> Paolo



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5517) Support linking to particular configuration parameters

2017-06-26 Thread Tom Bentley (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley updated KAFKA-5517:
---
Labels: patch-available  (was: )

> Support linking to particular configuration parameters
> --
>
> Key: KAFKA-5517
> URL: https://issues.apache.org/jira/browse/KAFKA-5517
> Project: Kafka
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Tom Bentley
>Assignee: Tom Bentley
>Priority: Minor
>  Labels: patch-available
>
> Currently the configuration parameters are documented in long tables, and it's 
> only possible to link to the heading before a particular table. When 
> discussing configuration parameters on forums it would be helpful to be able 
> to link to the particular parameter under discussion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5517) Support linking to particular configuration parameters

2017-06-26 Thread Tom Bentley (JIRA)
Tom Bentley created KAFKA-5517:
--

 Summary: Support linking to particular configuration parameters
 Key: KAFKA-5517
 URL: https://issues.apache.org/jira/browse/KAFKA-5517
 Project: Kafka
  Issue Type: Improvement
  Components: documentation
Reporter: Tom Bentley
Assignee: Tom Bentley
Priority: Minor


Currently the configuration parameters are documented in long tables, and it's 
only possible to link to the heading before a particular table. When discussing 
configuration parameters on forums it would be helpful to be able to link to 
the particular parameter under discussion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5508) Documentation for altering topics

2017-06-23 Thread Tom Bentley (JIRA)
Tom Bentley created KAFKA-5508:
--

 Summary: Documentation for altering topics
 Key: KAFKA-5508
 URL: https://issues.apache.org/jira/browse/KAFKA-5508
 Project: Kafka
  Issue Type: Bug
  Components: documentation
Reporter: Tom Bentley
Priority: Minor


According to the upgrade documentation:

bq. Altering topic configuration from the kafka-topics.sh script 
(kafka.admin.TopicCommand) has been deprecated. Going forward, please use the 
kafka-configs.sh script (kafka.admin.ConfigCommand) for this functionality. 

But the Operations documentation still tells people to use kafka-topics.sh to 
alter their topic configurations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5460) Documentation on website uses word-breaks resulting in confusion

2017-06-23 Thread Tom Bentley (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley updated KAFKA-5460:
---
Attachment: Screenshot from 2017-06-23 14-45-02.png

Here's a screenshot of how the documentation could look using the dl approach I 
suggested. Personally I think it's a lot better than the table-based approach. 
WDYT?

> Documentation on website uses word-breaks resulting in confusion
> 
>
> Key: KAFKA-5460
> URL: https://issues.apache.org/jira/browse/KAFKA-5460
> Project: Kafka
>  Issue Type: Bug
>Reporter: Karel Vervaeke
> Attachments: Screen Shot 2017-06-16 at 14.45.40.png, Screenshot from 
> 2017-06-23 14-45-02.png
>
>
> Documentation seems to suggest there is a configuration property 
> auto.off-set.reset but it really is auto.offset.reset.
> We should look into disabling the word-break css properties (globally or at 
> least in the configuration reference tables)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5460) Documentation on website uses word-breaks resulting in confusion

2017-06-22 Thread Tom Bentley (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16059457#comment-16059457
 ] 

Tom Bentley commented on KAFKA-5460:


bq. I wouldn't recommend switching to {{dl}} in this case because there are up to 6 
columns on the documentation page. dt/dd pairs are good for replacing tables 
with 2 columns.

I was thinking that the config property name would be the {{dt}}, and within the 
{{dd}} things like the default, type, and valid values could be listed as key: value 
pairs, followed by the actual documentation. So it would look, very 
approximately, like this:

*auto.create.topics.enable*
  _importance:_ high
  _type:_ boolean
  _default:_ true
  Enable auto creation of topic on the server.

This would leave plenty of horizontal space for both the config property name 
and the documentation. It would also solve the problem of not being able to see 
the table headings when part way down a long table, since the key-value pairs 
are self describing.
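
A minimal sketch of such markup (the class name and id scheme are invented for illustration):

```html
<dl class="config-list">
  <dt id="auto_create_topics_enable">auto.create.topics.enable</dt>
  <dd>
    <em>importance:</em> high<br/>
    <em>type:</em> boolean<br/>
    <em>default:</em> true
    <p>Enable auto creation of topic on the server.</p>
  </dd>
</dl>
```

Giving each {{dt}} an {{id}} would, as a side effect, also allow linking directly to a particular parameter (cf. KAFKA-5517).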

> Documentation on website uses word-breaks resulting in confusion
> 
>
> Key: KAFKA-5460
> URL: https://issues.apache.org/jira/browse/KAFKA-5460
> Project: Kafka
>  Issue Type: Bug
>Reporter: Karel Vervaeke
> Attachments: Screen Shot 2017-06-16 at 14.45.40.png
>
>
> Documentation seems to suggest there is a configuration property 
> auto.off-set.reset but it really is auto.offset.reset.
> We should look into disabling the word-break css properties (globally or at 
> least in the configuration reference tables)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-3881) Remove the replacing logic from "." to "_" in Fetcher

2017-06-22 Thread Tom Bentley (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley updated KAFKA-3881:
---
Labels: newbie patch-available  (was: newbie)

> Remove the replacing logic from "." to "_" in Fetcher
> -
>
> Key: KAFKA-3881
> URL: https://issues.apache.org/jira/browse/KAFKA-3881
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, metrics
>Reporter: Guozhang Wang
>  Labels: newbie, patch-available
>
> The logic of replacing "." with "_" in metrics names / tags was originally 
> introduced in the core package's metrics since Graphite treats "." as a 
> hierarchy separator (see KAFKA-1902); for the client metrics, the 
> GraphiteReporter is supposed to take care of this itself rather than having 
> Kafka metrics special-case it. In addition, right now only the consumer 
> Fetcher does the replacement; the producer Sender does not.
> So we should consider removing this logic in the consumer Fetcher's metrics 
> package. NOTE that this is a public API backward incompatible change.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5479) Docs for authorization omit authorizer.class.name

2017-06-22 Thread Tom Bentley (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley updated KAFKA-5479:
---
Labels: patch-available  (was: )

> Docs for authorization omit authorizer.class.name
> -
>
> Key: KAFKA-5479
> URL: https://issues.apache.org/jira/browse/KAFKA-5479
> Project: Kafka
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Tom Bentley
>Assignee: Tom Bentley
>Priority: Minor
>  Labels: patch-available
>
> The documentation in ยง7.4 Authorization and ACLs doesn't mention the 
> {{authorizer.class.name}} setting. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-4260) Improve documentation of configuration listeners=PLAINTEXT://0.0.0.0:9092

2017-06-22 Thread Tom Bentley (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley updated KAFKA-4260:
---
Labels: patch-available  (was: )

> Improve documentation of configuration listeners=PLAINTEXT://0.0.0.0:9092
> -
>
> Key: KAFKA-4260
> URL: https://issues.apache.org/jira/browse/KAFKA-4260
> Project: Kafka
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 0.10.0.1
>Reporter: Michal Turek
>Assignee: Tom Bentley
>Priority: Minor
>  Labels: patch-available
>
> We have just updated our testing Kafka cluster to 0.10 and we were facing one 
> issue with migration of legacy 0.8 configuration to
> {noformat}
> listeners=PLAINTEXT://0.0.0.0:9092
> # advertised.listeners=PLAINTEXT://myPublicHostName:9092# REQUIRED for 
> 0.0.0.0:9092
> {noformat}
> This configuration will be invalid if {{advertised.listeners}} is not set 
> too. Connection string 0.0.0.0:9092 is stored to ZooKeeper according to 
> documentation of  {{advertised.listeners}} and observed behavior, but it 
> isn't obvious and difficult to analyze. Clients and even other brokers try to 
> communicate with brokers using destination address 0.0.0.0:9092, which is 
> impossible. Specification of {{advertised.listeners}} as shown above fixed 
> the issue.
> Please update documentation at 
> http://kafka.apache.org/0100/documentation#brokerconfigs and backport the 
> change to 0.9 and 0.10 branches.
> h4. advertised.listeners
> Listeners to publish to ZooKeeper for clients to use, if different than the 
> *`listeners`* -above-. In IaaS environments, this may need to be different 
> from the interface to which the broker binds. If this is not set, the value 
> for `listeners` will be used.
> h4. listeners
> Listener List - Comma-separated list of URIs we will listen on and their 
> protocols. Specify hostname as 0.0.0.0 to bind to all interfaces *(note 
> `advertised.listeners` configuration is required for 0.0.0.0)*. Leave 
> hostname empty to bind to default interface. Examples of legal listener 
> lists: PLAINTEXT://myhost:9092,TRACE://:9091 PLAINTEXT://0.0.0.0:9092, 
> TRACE://localhost:9093



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-4260) Improve documentation of configuration listeners=PLAINTEXT://0.0.0.0:9092

2017-06-22 Thread Tom Bentley (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley reassigned KAFKA-4260:
--

Assignee: Tom Bentley

> Improve documentation of configuration listeners=PLAINTEXT://0.0.0.0:9092
> -
>
> Key: KAFKA-4260
> URL: https://issues.apache.org/jira/browse/KAFKA-4260
> Project: Kafka
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 0.10.0.1
>Reporter: Michal Turek
>Assignee: Tom Bentley
>Priority: Minor
>  Labels: patch-available
>
> We have just updated our testing Kafka cluster to 0.10 and we were facing one 
> issue with migration of legacy 0.8 configuration to
> {noformat}
> listeners=PLAINTEXT://0.0.0.0:9092
> # advertised.listeners=PLAINTEXT://myPublicHostName:9092# REQUIRED for 
> 0.0.0.0:9092
> {noformat}
> This configuration will be invalid if {{advertised.listeners}} is not set 
> too. Connection string 0.0.0.0:9092 is stored to ZooKeeper according to 
> documentation of  {{advertised.listeners}} and observed behavior, but it 
> isn't obvious and difficult to analyze. Clients and even other brokers try to 
> communicate with brokers using destination address 0.0.0.0:9092, which is 
> impossible. Specification of {{advertised.listeners}} as shown above fixed 
> the issue.
> Please update documentation at 
> http://kafka.apache.org/0100/documentation#brokerconfigs and backport the 
> change to 0.9 and 0.10 branches.
> h4. advertised.listeners
> Listeners to publish to ZooKeeper for clients to use, if different than the 
> *`listeners`* -above-. In IaaS environments, this may need to be different 
> from the interface to which the broker binds. If this is not set, the value 
> for `listeners` will be used.
> h4. listeners
> Listener List - Comma-separated list of URIs we will listen on and their 
> protocols. Specify hostname as 0.0.0.0 to bind to all interfaces *(note 
> `advertised.listeners` configuration is required for 0.0.0.0)*. Leave 
> hostname empty to bind to default interface. Examples of legal listener 
> lists: PLAINTEXT://myhost:9092,TRACE://:9091 PLAINTEXT://0.0.0.0:9092, 
> TRACE://localhost:9093



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-4059) Documentation still refers to AsyncProducer and SyncProducer

2017-06-22 Thread Tom Bentley (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley reassigned KAFKA-4059:
--

Assignee: Tom Bentley

> Documentation still refers to AsyncProducer and SyncProducer
> 
>
> Key: KAFKA-4059
> URL: https://issues.apache.org/jira/browse/KAFKA-4059
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.10.0.1
>Reporter: Andrew B
>Assignee: Tom Bentley
>  Labels: patch-available
>
> The 0.10 docs are still referring to AsyncProducer and SyncProducer.
> See: https://github.com/apache/kafka/search?utf8=%E2%9C%93=AsyncProducer



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-3575) Use console consumer access topic that does not exist, can not use "Control + C" to exit process

2017-06-21 Thread Tom Bentley (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley reassigned KAFKA-3575:
--

Assignee: Tom Bentley

> Use console consumer access topic that does not exist, can not use "Control + 
> C" to exit process
> 
>
> Key: KAFKA-3575
> URL: https://issues.apache.org/jira/browse/KAFKA-3575
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
> Environment: SUSE Linux Enterprise Server 11 SP3
>Reporter: NieWang
>Assignee: Tom Bentley
>Priority: Minor
>  Labels: patch-available
>
> 1.  use "sh kafka-console-consumer.sh --zookeeper 10.252.23.133:2181 --topic 
> topic_02"  start console consumer. topic_02 does not exist.
> 2. you can not use "Control + C" to exit console consumer process. The 
> process is blocked.
> 3. use jstack check process stack, as follows:
> linux:~ # jstack 122967
> 2016-04-18 15:46:06
> Full thread dump Java HotSpot(TM) 64-Bit Server VM (25.66-b17 mixed mode):
> "Attach Listener" #29 daemon prio=9 os_prio=0 tid=0x01781800 
> nid=0x1e0c8 waiting on condition [0x]
>java.lang.Thread.State: RUNNABLE
> "Thread-4" #27 prio=5 os_prio=0 tid=0x018a4000 nid=0x1e08a waiting on 
> condition [0x7ffbe5ac]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0xe00ed3b8> (a 
> java.util.concurrent.CountDownLatch$Sync)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
> at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
> at kafka.tools.ConsoleConsumer$$anon$1.run(ConsoleConsumer.scala:101)
> "SIGINT handler" #28 daemon prio=9 os_prio=0 tid=0x019d5800 
> nid=0x1e089 in Object.wait() [0x7ffbe5bc1000]
>java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.$$YJP$$wait(Native Method)
> at java.lang.Object.wait(Object.java)
> at java.lang.Thread.join(Thread.java:1245)
> - locked <0xe71fd4e8> (a kafka.tools.ConsoleConsumer$$anon$1)
> at java.lang.Thread.join(Thread.java:1319)
> at 
> java.lang.ApplicationShutdownHooks.runHooks(ApplicationShutdownHooks.java:106)
> at 
> java.lang.ApplicationShutdownHooks$1.run(ApplicationShutdownHooks.java:46)
> at java.lang.Shutdown.runHooks(Shutdown.java:123)
> at java.lang.Shutdown.sequence(Shutdown.java:167)
> at java.lang.Shutdown.exit(Shutdown.java:212)
> - locked <0xe00abfd8> (a java.lang.Class for 
> java.lang.Shutdown)
> at java.lang.Terminator$1.handle(Terminator.java:52)
> at sun.misc.Signal$1.run(Signal.java:212)
> at java.lang.Thread.run(Thread.java:745)
> "metrics-meter-tick-thread-2" #20 daemon prio=5 os_prio=0 
> tid=0x7ffbec77a800 nid=0x1e079 waiting on condition [0x7ffbe66c8000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0xe6fa6438> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1088)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
> at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> "metrics-meter-tick-thread-1" #19 daemon prio=5 os_prio=0 
> tid=0x7ffbec783000 nid=0x1e078 waiting on condition [0x7ffbe67c9000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0xe6fa6438> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
>

[jira] [Updated] (KAFKA-3575) Use console consumer access topic that does not exist, can not use "Control + C" to exit process

2017-06-21 Thread Tom Bentley (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley updated KAFKA-3575:
---
Labels: patch-available  (was: )

> Use console consumer access topic that does not exist, can not use "Control + 
> C" to exit process
> 
>
> Key: KAFKA-3575
> URL: https://issues.apache.org/jira/browse/KAFKA-3575
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
> Environment: SUSE Linux Enterprise Server 11 SP3
>Reporter: NieWang
>Assignee: Tom Bentley
>Priority: Minor
>  Labels: patch-available
>
> 1.  use "sh kafka-console-consumer.sh --zookeeper 10.252.23.133:2181 --topic 
> topic_02"  start console consumer. topic_02 does not exist.
> 2. you can not use "Control + C" to exit console consumer process. The 
> process is blocked.
> 3. use jstack check process stack, as follows:
> linux:~ # jstack 122967
> 2016-04-18 15:46:06
> Full thread dump Java HotSpot(TM) 64-Bit Server VM (25.66-b17 mixed mode):
> "Attach Listener" #29 daemon prio=9 os_prio=0 tid=0x01781800 
> nid=0x1e0c8 waiting on condition [0x]
>java.lang.Thread.State: RUNNABLE
> "Thread-4" #27 prio=5 os_prio=0 tid=0x018a4000 nid=0x1e08a waiting on 
> condition [0x7ffbe5ac]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0xe00ed3b8> (a 
> java.util.concurrent.CountDownLatch$Sync)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
> at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
> at kafka.tools.ConsoleConsumer$$anon$1.run(ConsoleConsumer.scala:101)
> "SIGINT handler" #28 daemon prio=9 os_prio=0 tid=0x019d5800 
> nid=0x1e089 in Object.wait() [0x7ffbe5bc1000]
>java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.$$YJP$$wait(Native Method)
> at java.lang.Object.wait(Object.java)
> at java.lang.Thread.join(Thread.java:1245)
> - locked <0xe71fd4e8> (a kafka.tools.ConsoleConsumer$$anon$1)
> at java.lang.Thread.join(Thread.java:1319)
> at 
> java.lang.ApplicationShutdownHooks.runHooks(ApplicationShutdownHooks.java:106)
> at 
> java.lang.ApplicationShutdownHooks$1.run(ApplicationShutdownHooks.java:46)
> at java.lang.Shutdown.runHooks(Shutdown.java:123)
> at java.lang.Shutdown.sequence(Shutdown.java:167)
> at java.lang.Shutdown.exit(Shutdown.java:212)
> - locked <0xe00abfd8> (a java.lang.Class for 
> java.lang.Shutdown)
> at java.lang.Terminator$1.handle(Terminator.java:52)
> at sun.misc.Signal$1.run(Signal.java:212)
> at java.lang.Thread.run(Thread.java:745)
> "metrics-meter-tick-thread-2" #20 daemon prio=5 os_prio=0 
> tid=0x7ffbec77a800 nid=0x1e079 waiting on condition [0x7ffbe66c8000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0xe6fa6438> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1088)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
> at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> "metrics-meter-tick-thread-1" #19 daemon prio=5 os_prio=0 
> tid=0x7ffbec783000 nid=0x1e078 waiting on condition [0x7ffbe67c9000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0xe6fa6438> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
>

[jira] [Updated] (KAFKA-4931) stop script fails due 4096 ps output limit

2017-06-21 Thread Tom Bentley (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley updated KAFKA-4931:
---
Labels: patch-available  (was: )

> stop script fails due 4096 ps output limit
> --
>
> Key: KAFKA-4931
> URL: https://issues.apache.org/jira/browse/KAFKA-4931
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.10.2.0
>Reporter: Amit Jain
>Priority: Minor
>  Labels: patch-available
>
> When run the script: bin/zookeeper-server-stop.sh fails to stop the zookeeper 
> server process if the ps output exceeds 4096 character limit of linux. I 
> think instead of ps we can use ${JAVA_HOME}/bin/jps -vl | grep QuorumPeerMain 
>  it would correctly stop zookeeper process. Currently we are using kill 
> PIDS=$(ps ax | grep java | grep -i QuorumPeerMain | grep -v grep | awk 
> '{print $1}')



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5298) MirrorMaker deadlocks with missing topics

2017-06-21 Thread Tom Bentley (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16057437#comment-16057437
 ] 

Tom Bentley commented on KAFKA-5298:


The deadlock aspect has already been reported in KAFKA-3575

> MirrorMaker deadlocks with missing topics
> -
>
> Key: KAFKA-5298
> URL: https://issues.apache.org/jira/browse/KAFKA-5298
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, tools
>Affects Versions: 0.10.2.1
>Reporter: Raymond Conn
>
> When mirrorMaker mirrors a topic to destination brokers that have topic auto 
> create disabled and a topic doesn't exist on the destination brokers, the 
> producer in mirror maker logs the following 
> {code}
> Error while fetching metadata with correlation id 467 : 
> \{mirror-test2=UNKNOWN_TOPIC_OR_PARTITION\}
> Error while fetching metadata with correlation id 468 : 
> {mirror-test2=UNKNOWN_TOPIC_OR_PARTITION}
> {code}
> This log message is fine and expected. The problem is the log message stops 
> ~5 min later. At which point the logs look fine, but mirror maker is not 
> mirroring any of its topics. 
> What's worse is mirrorMaker is basically in an unrecoverable state once this 
> happens (the log statement stops). If you create the topic at the destination 
> mirrorMaker still won't mirror data until a restart. Attempts to restart 
> mirrorMaker (cleanly) fail because the process is more or less deadlocked in 
> its shutdown hook.
> Here is the reasoning:
> * MirrorMaker becomes unrecoverable after 5 minutes because of this loop in 
> the 
> [producer|https://github.com/apache/kafka/blob/e06cd3e55f25a0bb414e0770493906ea8019420a/clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java#L543-L561]
> * The producer will keep waiting for metadata for the missing topic or until 
> the max timeout is reached. (max long in this case) 
> * after 5 minutes the producer stops making a metadata request for the topic 
> because that topic expires 
> [here|https://github.com/apache/kafka/blob/e06cd3e55f25a0bb414e0770493906ea8019420a/clients/src/main/java/org/apache/kafka/clients/Metadata.java#L218]
>  
> * topic is never re-added for metadata requests since the only add is before 
> entering the loop 
> [here|https://github.com/apache/kafka/blob/e06cd3e55f25a0bb414e0770493906ea8019420a/clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java#L528]
> So after 5 minutes, all metadata requests going forward are for no topics, 
> since the topic expired. The mirrorMaker thread essentially gets stuck 
> waiting forever, since there will never be a metadata request for the topic 
> the thread is waiting on.
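The expiry interaction above can be sketched in simplified form. This is not the real client code; the class and method names are illustrative, and the expiry window is shortened from 5 minutes so the sketch terminates. It only shows the mechanism: the topic is added once before the wait loop, an expiry pass drops it, and nothing inside the loop ever re-adds it.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified sketch (not the real producer code) of how topic expiry in
// Metadata can strand a waiter: the topic is added once, expires after
// TOPIC_EXPIRY_MS, and nothing in the wait loop ever re-adds it.
public class MetadataExpirySketch {
    static final long TOPIC_EXPIRY_MS = 100; // the real client uses 5 minutes

    // topic -> expiry deadline, mimicking the producer Metadata's topic set
    private final Map<String, Long> topics = new HashMap<>();

    void add(String topic, long nowMs) {
        topics.put(topic, nowMs + TOPIC_EXPIRY_MS);
    }

    // mimics the expiry pass: stale topics are dropped from future requests
    void maybeExpire(long nowMs) {
        topics.entrySet().removeIf(e -> e.getValue() <= nowMs);
    }

    boolean willRequestMetadataFor(String topic) {
        return topics.containsKey(topic);
    }

    public static void main(String[] args) {
        MetadataExpirySketch metadata = new MetadataExpirySketch();
        long now = 0;
        metadata.add("mirror-test2", now); // added once, before the wait loop
        // The wait loop spins with maxWaitMs = Long.MAX_VALUE, never re-adding:
        for (; now <= 3 * TOPIC_EXPIRY_MS; now += 50) {
            metadata.maybeExpire(now); // periodic metadata refresh pass
        }
        // After expiry, no future metadata request includes the topic, so the
        // waiter can never observe it and blocks forever.
        System.out.println(metadata.willRequestMetadataFor("mirror-test2"));
    }
}
```

Once the topic has fallen out of the set, every subsequent metadata request omits it, which is exactly the "requests for no topics" state described above.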
> All of this basically leads to a deadlock state in the shutdown hook: 
> * The shutdown hook sends a shutdown to the mirrorMaker threads 
> * It then waits for the threads to exit their loop by waiting on a 
> [latch|https://github.com/apache/kafka/blob/0.10.2/core/src/main/scala/kafka/tools/MirrorMaker.scala#L396]
> * The latch is never counted down in 
> [produce|https://github.com/apache/kafka/blob/0.10.2/core/src/main/scala/kafka/tools/MirrorMaker.scala#L434]
> * The thread will never exit the loop to count down the latch on line 462.
> This can be seen with a thread dump of the shutdown hook thread
> {code}
> Name: MirrorMakerShutdownHook
> State: WAITING on java.util.concurrent.CountDownLatch$Sync@3ffebeac
> Total blocked: 0  Total waited: 1
> Stack trace: 
> sun.misc.Unsafe.park(Native Method)
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
> java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
> kafka.tools.MirrorMaker$MirrorMakerThread.awaitShutdown(MirrorMaker.scala:498)
> kafka.tools.MirrorMaker$$anonfun$cleanShutdown$4.apply(MirrorMaker.scala:396)
> kafka.tools.MirrorMaker$$anonfun$cleanShutdown$4.apply(MirrorMaker.scala:396)
> scala.collection.Iterator$class.foreach(Iterator.scala:893)
> {code}
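The shutdown handshake can be reduced to a minimal sketch. The names here are illustrative, not MirrorMaker's actual classes, and the hook's wait is given a timeout so the sketch terminates; the real hook awaits indefinitely, which is the deadlock.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Minimal sketch of the MirrorMaker shutdown handshake: the shutdown hook
// awaits a latch that the worker only counts down after leaving its produce
// loop. If the worker is parked inside producer.send() forever, the latch
// never reaches zero and the clean shutdown blocks.
public class ShutdownLatchSketch {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch shutdownLatch = new CountDownLatch(1);

        Thread worker = new Thread(() -> {
            try {
                // stand-in for the produce loop stuck waiting on metadata
                new CountDownLatch(1).await(); // blocks forever
            } catch (InterruptedException ignored) {
            } finally {
                shutdownLatch.countDown(); // never reached in this run
            }
        });
        worker.setDaemon(true);
        worker.start();

        // What the shutdown hook does, but with a timeout so the sketch ends:
        boolean exited = shutdownLatch.await(200, TimeUnit.MILLISECONDS);
        System.out.println("worker exited cleanly: " + exited);
    }
}
```

With no timeout on the real `awaitShutdown`, the hook thread parks on the latch exactly as the thread dump shows.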
> The root of the issue is more or less the one documented in 
> https://issues.apache.org/jira/browse/KAFKA-3450, where the producer can 
> block waiting for metadata.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-2967) Move Kafka documentation to ReStructuredText

2017-06-19 Thread Tom Bentley (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16054289#comment-16054289
 ] 

Tom Bentley commented on KAFKA-2967:


Any progress on this [~ceposta], [~gwenshap]? I would like to help improve 
the documentation, and not having to edit the raw HTML would make that a nicer 
experience.

> Move Kafka documentation to ReStructuredText
> 
>
> Key: KAFKA-2967
> URL: https://issues.apache.org/jira/browse/KAFKA-2967
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Gwen Shapira
>
> Storing documentation as HTML is kind of BS :)
> * Formatting is a pain, and making it look good is even worse
> * It's just HTML, so we can't generate PDFs
> * Reading and editing is painful
> * Validating changes is hard because our formatting relies on all kinds of 
> Apache Server features.
> I suggest:
> * Move to RST
> * Generate HTML and PDF during build using Sphinx plugin for Gradle.
> Lots of Apache projects are doing this.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5459) Support kafka-console-producer.sh messages as whole file

2017-06-16 Thread Tom Bentley (JIRA)
Tom Bentley created KAFKA-5459:
--

 Summary: Support kafka-console-producer.sh messages as whole file
 Key: KAFKA-5459
 URL: https://issues.apache.org/jira/browse/KAFKA-5459
 Project: Kafka
  Issue Type: Improvement
  Components: tools
Affects Versions: 0.10.2.1
Reporter: Tom Bentley
Priority: Trivial


{{kafka-console-producer.sh}} treats each line read as a separate message. This 
can be controlled using the {{--line-reader}} option and the corresponding 
{{MessageReader}} trait. It would be useful to have built-in support for 
sending the whole input stream/file as the message. 
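The core of such a "whole file" reader is just slurping the entire input stream into one payload instead of splitting on newlines. A minimal sketch, with illustrative names (a real implementation would plug into the {{MessageReader}} mechanism rather than this standalone class):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch of the core of a whole-file reader: instead of treating each line
// as a separate message (the default behaviour), read the entire stream
// into a single payload. Names here are illustrative only.
public class WholeStreamReader {
    static byte[] readAll(InputStream in) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        byte[] chunk = new byte[8192];
        int n;
        while ((n = in.read(chunk)) != -1) {
            buf.write(chunk, 0, n); // newlines are kept, not used as delimiters
        }
        return buf.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream("line1\nline2\n".getBytes());
        byte[] message = readAll(in);
        // One message containing both lines, rather than two one-line messages
        System.out.println(new String(message).equals("line1\nline2\n"));
    }
}
```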





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

