[jira] [Updated] (KAFKA-6024) Consider moving validation in KafkaConsumer ahead of call to acquireAndEnsureOpen()

2018-01-24 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated KAFKA-6024:
--
Description: 
In several methods, parameter validation is done after calling
acquireAndEnsureOpen():
{code}
public void seek(TopicPartition partition, long offset) {
    acquireAndEnsureOpen();
    try {
        if (offset < 0)
            throw new IllegalArgumentException("seek offset must not be a negative number");
{code}
Since the parameter values cannot change during an invocation, performing the
validation ahead of the acquireAndEnsureOpen() call would be better.
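
A minimal sketch of the proposed reordering (simplified from KafkaConsumer#seek; the body of the try block is elided):
{code:java}
public void seek(TopicPartition partition, long offset) {
    // The argument cannot change once the method is invoked, so it can be
    // checked before acquiring the light lock and checking for a closed client.
    if (offset < 0)
        throw new IllegalArgumentException("seek offset must not be a negative number");

    acquireAndEnsureOpen();
    try {
        // ... perform the seek ...
    } finally {
        release();
    }
}
{code}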

  was:
In several methods, parameter validation is done after calling
acquireAndEnsureOpen():
{code}
public void seek(TopicPartition partition, long offset) {
    acquireAndEnsureOpen();
    try {
        if (offset < 0)
            throw new IllegalArgumentException("seek offset must not be a negative number");
{code}
Since the parameter values cannot change during an invocation, performing the
validation ahead of the acquireAndEnsureOpen() call would be better.


> Consider moving validation in KafkaConsumer ahead of call to 
> acquireAndEnsureOpen()
> ---
>
> Key: KAFKA-6024
> URL: https://issues.apache.org/jira/browse/KAFKA-6024
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: siva santhalingam
>Priority: Minor
>
> In several methods, parameter validation is done after calling
> acquireAndEnsureOpen():
> {code}
> public void seek(TopicPartition partition, long offset) {
>     acquireAndEnsureOpen();
>     try {
>         if (offset < 0)
>             throw new IllegalArgumentException("seek offset must not be a negative number");
> {code}
> Since the parameter values cannot change during an invocation, performing the
> validation ahead of the acquireAndEnsureOpen() call would be better.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6335) SimpleAclAuthorizerTest#testHighConcurrencyModificationOfResourceAcls fails intermittently

2018-01-24 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338615#comment-16338615
 ] 

Ted Yu commented on KAFKA-6335:
---

Haven't seen this for a while.

> SimpleAclAuthorizerTest#testHighConcurrencyModificationOfResourceAcls fails 
> intermittently
> --
>
> Key: KAFKA-6335
> URL: https://issues.apache.org/jira/browse/KAFKA-6335
> Project: Kafka
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Manikumar
>Priority: Major
> Fix For: 1.1.0
>
>
> From 
> https://builds.apache.org/job/kafka-pr-jdk9-scala2.12/3045/testReport/junit/kafka.security.auth/SimpleAclAuthorizerTest/testHighConcurrencyModificationOfResourceAcls/
>  :
> {code}
> java.lang.AssertionError: expected acls Set(User:36 has Allow permission for 
> operations: Read from hosts: *, User:7 has Allow permission for operations: 
> Read from hosts: *, User:21 has Allow permission for operations: Read from 
> hosts: *, User:39 has Allow permission for operations: Read from hosts: *, 
> User:43 has Allow permission for operations: Read from hosts: *, User:3 has 
> Allow permission for operations: Read from hosts: *, User:35 has Allow 
> permission for operations: Read from hosts: *, User:15 has Allow permission 
> for operations: Read from hosts: *, User:16 has Allow permission for 
> operations: Read from hosts: *, User:22 has Allow permission for operations: 
> Read from hosts: *, User:26 has Allow permission for operations: Read from 
> hosts: *, User:11 has Allow permission for operations: Read from hosts: *, 
> User:38 has Allow permission for operations: Read from hosts: *, User:8 has 
> Allow permission for operations: Read from hosts: *, User:28 has Allow 
> permission for operations: Read from hosts: *, User:32 has Allow permission 
> for operations: Read from hosts: *, User:25 has Allow permission for 
> operations: Read from hosts: *, User:41 has Allow permission for operations: 
> Read from hosts: *, User:44 has Allow permission for operations: Read from 
> hosts: *, User:48 has Allow permission for operations: Read from hosts: *, 
> User:2 has Allow permission for operations: Read from hosts: *, User:9 has 
> Allow permission for operations: Read from hosts: *, User:14 has Allow 
> permission for operations: Read from hosts: *, User:46 has Allow permission 
> for operations: Read from hosts: *, User:13 has Allow permission for 
> operations: Read from hosts: *, User:5 has Allow permission for operations: 
> Read from hosts: *, User:29 has Allow permission for operations: Read from 
> hosts: *, User:45 has Allow permission for operations: Read from hosts: *, 
> User:6 has Allow permission for operations: Read from hosts: *, User:37 has 
> Allow permission for operations: Read from hosts: *, User:23 has Allow 
> permission for operations: Read from hosts: *, User:19 has Allow permission 
> for operations: Read from hosts: *, User:24 has Allow permission for 
> operations: Read from hosts: *, User:17 has Allow permission for operations: 
> Read from hosts: *, User:34 has Allow permission for operations: Read from 
> hosts: *, User:12 has Allow permission for operations: Read from hosts: *, 
> User:42 has Allow permission for operations: Read from hosts: *, User:4 has 
> Allow permission for operations: Read from hosts: *, User:47 has Allow 
> permission for operations: Read from hosts: *, User:18 has Allow permission 
> for operations: Read from hosts: *, User:31 has Allow permission for 
> operations: Read from hosts: *, User:49 has Allow permission for operations: 
> Read from hosts: *, User:33 has Allow permission for operations: Read from 
> hosts: *, User:1 has Allow permission for operations: Read from hosts: *, 
> User:27 has Allow permission for operations: Read from hosts: *) but got 
> Set(User:36 has Allow permission for operations: Read from hosts: *, User:7 
> has Allow permission for operations: Read from hosts: *, User:21 has Allow 
> permission for operations: Read from hosts: *, User:39 has Allow permission 
> for operations: Read from hosts: *, User:43 has Allow permission for 
> operations: Read from hosts: *, User:3 has Allow permission for operations: 
> Read from hosts: *, User:35 has Allow permission for operations: Read from 
> hosts: *, User:15 has Allow permission for operations: Read from hosts: *, 
> User:16 has Allow permission for operations: Read from hosts: *, User:22 has 
> Allow permission for operations: Read from hosts: *, User:26 has Allow 
> permission for operations: Read from hosts: *, User:11 has Allow permission 
> for operations: Read from hosts: *, User:38 has Allow permission for 
> operations: Read from hosts: *, User:8 has Allow permission for operations: 
> Read from hosts: *, User:28 has Allow permission for 

[jira] [Updated] (KAFKA-6472) WordCount example code error

2018-01-24 Thread JONYhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JONYhao updated KAFKA-6472:
---
Docs Text: 
There is a closing parenthesis ")" missing in the WordCount example tutorial

https://kafka.apache.org/10/documentation/streams/tutorial

at the end of the page, line 31:

31   .to("streams-wordcount-output", Produced.with(Serdes.String(), Serdes.Long());

should be

31   .to("streams-wordcount-output", Produced.with(Serdes.String(), Serdes.Long()));

  was:
There is a closing parenthesis ")" missing in the WordCount example tutorial

https://kafka.apache.org/10/documentation/streams/tutorial

at the end of the page, line 31:

31   .to("streams-wordcount-output", Produced.with(Serdes.String(), Serdes.Long());

should be

31   .to("streams-wordcount-output", Produced.with(Serdes.String(), Serdes.Long()));


> WordCount example code error
> 
>
> Key: KAFKA-6472
> URL: https://issues.apache.org/jira/browse/KAFKA-6472
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation, streams
>Affects Versions: 0.11.0.2
>Reporter: JONYhao
>Assignee: Joel Hamill
>Priority: Trivial
>  Labels: newbie
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> There is a closing parenthesis ")" missing in the WordCount example tutorial
> [https://kafka.apache.org/10/documentation/streams/tutorial]
> at the end of the page, line 31:
> 31   .to("streams-wordcount-output", Produced.with(Serdes.String(), Serdes.Long());
> should be
> 31   .to("streams-wordcount-output", Produced.with(Serdes.String(), Serdes.Long()));



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Assigned] (KAFKA-6481) Improving performance of the function ControllerChannelManager.addUpdateMetadataRequestForBrokers

2018-01-24 Thread Lucas Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lucas Wang reassigned KAFKA-6481:
-

Assignee: Lucas Wang

> Improving performance of the function 
> ControllerChannelManager.addUpdateMetadataRequestForBrokers
> -
>
> Key: KAFKA-6481
> URL: https://issues.apache.org/jira/browse/KAFKA-6481
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Lucas Wang
>Assignee: Lucas Wang
>Priority: Minor
>
> The function ControllerChannelManager.addUpdateMetadataRequestForBrokers
> should only process the partitions specified in the partitions parameter,
> i.e. the 2nd parameter, and avoid iterating through the set of partitions in
> TopicDeletionManager.partitionsToBeDeleted.
>  
> Here is why the current code can be a problem:
> The number of partitions-to-be-deleted stored in the field
> TopicDeletionManager.partitionsToBeDeleted can become quite large under
> certain scenarios. For instance, if a topic a0 has dead replicas, the topic
> a0 would be marked as ineligible for deletion, and its partitions will be
> retained in the field TopicDeletionManager.partitionsToBeDeleted for future
> retries.
> With a large set of partitions in TopicDeletionManager.partitionsToBeDeleted,
> if some replicas in another topic a1 need to be transitioned to
> OfflineReplica state, possibly because of a broker going offline, a call
> stack like the following will occur on the controller, causing an iteration
> of the whole partitions-to-be-deleted set for every single affected partition:
>     controller.topicDeletionManager.partitionsToBeDeleted.foreach(partition
> => updateMetadataRequestPartitionInfo(partition, beingDeleted = true))
>     ControllerBrokerRequestBatch.addUpdateMetadataRequestForBrokers
>     ControllerBrokerRequestBatch.addLeaderAndIsrRequestForBrokers
>     (inside a for-loop for each partition)
>     ReplicaStateMachine.doHandleStateChanges
>     ReplicaStateMachine.handleStateChanges
>     KafkaController.onReplicasBecomeOffline
>     KafkaController.onBrokerFailure
> How to reproduce the problem:
> 1. Create a cluster with 2 brokers having ids 1 and 2
> 2. Create a topic having 10 partitions and deliberately assign the replicas
> to non-existing brokers, i.e.
>  ./bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic a0
> --replica-assignment `echo -n 3:4; for i in \`seq 9\`; do echo -n ,3:4; done`
> 3. Delete the topic; all of its partitions will be retained in the field
> TopicDeletionManager.partitionsToBeDeleted, since the topic has dead
> replicas and is ineligible for deletion.
> 4. Create another topic a1, also having 10 partitions, i.e.
>  ./bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic a1
> --replica-assignment `echo -n 1:2; for i in \`seq 9\`; do echo -n ,1:2; done`
> 5. Kill broker 2, causing the replicas on broker 2 to be transitioned to
> OfflineReplica state on the controller.
> 6. Verify that the following log message appears over 200 times in the
> controller.log file, one for each iteration of the a0 partitions:
>  "Leader not yet assigned for partition [a0,..]. Skip sending
> UpdateMetadataRequest."
>  
>  What happened was:
>  1. During controlled shutdown, the function
> KafkaController.doControlledShutdown calls
> replicaStateMachine.handleStateChanges to transition all the replicas on
> broker 2 to OfflineState. That in turn generates 100 (10 x 10) entries of the
> log message above.
>  2. When the broker zNode is gone in ZK, the function
> KafkaController.onBrokerFailure calls KafkaController.onReplicasBecomeOffline
> to transition all the replicas on broker 2 to OfflineState. And this again
> generates 100 (10 x 10) entries of the log message above.
> After applying the patch in this RB, I've verified that by going through the
> steps above, broker 2 going offline NO LONGER generates log entries for the
> a0 partitions.
> Also I've verified that topic deletion for topic a1 still works fine.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6481) Improving performance of the function ControllerChannelManager.addUpdateMetadataRequestForBrokers

2018-01-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338465#comment-16338465
 ] 

ASF GitHub Bot commented on KAFKA-6481:
---

gitlw opened a new pull request #4472: KAFKA-6481: Improving performance of the 
function ControllerChannelManager.addUpd…
URL: https://github.com/apache/kafka/pull/4472
 
 
   …ateMetadataRequestForBrokers
   
   *More detailed description of your change,
   if necessary. The PR title and PR message become
   the squashed commit message, so use a separate
   comment to ping reviewers.*
   
   *Summary of testing strategy (including rationale)
   for the feature or bug fix. Unit and/or integration
   tests are expected for any behaviour change and
   system tests should be considered for larger changes.*
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Improving performance of the function 
> ControllerChannelManager.addUpdateMetadataRequestForBrokers
> -
>
> Key: KAFKA-6481
> URL: https://issues.apache.org/jira/browse/KAFKA-6481
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Lucas Wang
>Priority: Minor
>
> The function ControllerChannelManager.addUpdateMetadataRequestForBrokers
> should only process the partitions specified in the partitions parameter,
> i.e. the 2nd parameter, and avoid iterating through the set of partitions in
> TopicDeletionManager.partitionsToBeDeleted.
>  
> Here is why the current code can be a problem:
> The number of partitions-to-be-deleted stored in the field
> TopicDeletionManager.partitionsToBeDeleted can become quite large under
> certain scenarios. For instance, if a topic a0 has dead replicas, the topic
> a0 would be marked as ineligible for deletion, and its partitions will be
> retained in the field TopicDeletionManager.partitionsToBeDeleted for future
> retries.
> With a large set of partitions in TopicDeletionManager.partitionsToBeDeleted,
> if some replicas in another topic a1 need to be transitioned to
> OfflineReplica state, possibly because of a broker going offline, a call
> stack like the following will occur on the controller, causing an iteration
> of the whole partitions-to-be-deleted set for every single affected partition:
>     controller.topicDeletionManager.partitionsToBeDeleted.foreach(partition
> => updateMetadataRequestPartitionInfo(partition, beingDeleted = true))
>     ControllerBrokerRequestBatch.addUpdateMetadataRequestForBrokers
>     ControllerBrokerRequestBatch.addLeaderAndIsrRequestForBrokers
>     (inside a for-loop for each partition)
>     ReplicaStateMachine.doHandleStateChanges
>     ReplicaStateMachine.handleStateChanges
>     KafkaController.onReplicasBecomeOffline
>     KafkaController.onBrokerFailure
> How to reproduce the problem:
> 1. Create a cluster with 2 brokers having ids 1 and 2
> 2. Create a topic having 10 partitions and deliberately assign the replicas
> to non-existing brokers, i.e.
>  ./bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic a0
> --replica-assignment `echo -n 3:4; for i in \`seq 9\`; do echo -n ,3:4; done`
> 3. Delete the topic; all of its partitions will be retained in the field
> TopicDeletionManager.partitionsToBeDeleted, since the topic has dead
> replicas and is ineligible for deletion.
> 4. Create another topic a1, also having 10 partitions, i.e.
>  ./bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic a1
> --replica-assignment `echo -n 1:2; for i in \`seq 9\`; do echo -n ,1:2; done`
> 5. Kill broker 2, causing the replicas on broker 2 to be transitioned to
> OfflineReplica state on the controller.
> 6. Verify that the following log message appears over 200 times in the
> controller.log file, one for each iteration of the a0 partitions:
>  "Leader not yet assigned for partition [a0,..]. Skip sending
> UpdateMetadataRequest."
>  
>  What happened was:
>  1. During controlled shutdown, the function
> KafkaController.doControlledShutdown calls
> replicaStateMachine.handleStateChanges to transition all the replicas on
> broker 2 to OfflineState. That in turn generates 100 (10 x 10) entries of the
> log message above.
>  2. When the broker zNode is gone in ZK, the function
> KafkaController.onBrokerFailure calls KafkaController.onReplicasBecomeOffline
> to transition all the

[jira] [Created] (KAFKA-6481) Improving performance of the function ControllerChannelManager.addUpdateMetadataRequestForBrokers

2018-01-24 Thread Lucas Wang (JIRA)
Lucas Wang created KAFKA-6481:
-

 Summary: Improving performance of the function 
ControllerChannelManager.addUpdateMetadataRequestForBrokers
 Key: KAFKA-6481
 URL: https://issues.apache.org/jira/browse/KAFKA-6481
 Project: Kafka
  Issue Type: Improvement
Reporter: Lucas Wang


The function ControllerChannelManager.addUpdateMetadataRequestForBrokers should
only process the partitions specified in the partitions parameter, i.e. the 2nd
parameter, and avoid iterating through the set of partitions in
TopicDeletionManager.partitionsToBeDeleted.

Here is why the current code can be a problem:

The number of partitions-to-be-deleted stored in the field
TopicDeletionManager.partitionsToBeDeleted can become quite large under certain
scenarios. For instance, if a topic a0 has dead replicas, the topic a0 would be
marked as ineligible for deletion, and its partitions will be retained in the
field TopicDeletionManager.partitionsToBeDeleted for future retries.
With a large set of partitions in TopicDeletionManager.partitionsToBeDeleted,
if some replicas in another topic a1 need to be transitioned to OfflineReplica
state, possibly because of a broker going offline, a call stack like the
following will occur on the controller, causing an iteration of the whole
partitions-to-be-deleted set for every single affected partition:

    controller.topicDeletionManager.partitionsToBeDeleted.foreach(partition =>
updateMetadataRequestPartitionInfo(partition, beingDeleted = true))
    ControllerBrokerRequestBatch.addUpdateMetadataRequestForBrokers
    ControllerBrokerRequestBatch.addLeaderAndIsrRequestForBrokers
    (inside a for-loop for each partition)
    ReplicaStateMachine.doHandleStateChanges
    ReplicaStateMachine.handleStateChanges
    KafkaController.onReplicasBecomeOffline
    KafkaController.onBrokerFailure

How to reproduce the problem:
1. Create a cluster with 2 brokers having ids 1 and 2
2. Create a topic having 10 partitions and deliberately assign the replicas to
non-existing brokers, i.e.
 ./bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic a0
--replica-assignment `echo -n 3:4; for i in \`seq 9\`; do echo -n ,3:4; done`

3. Delete the topic; all of its partitions will be retained in the field
TopicDeletionManager.partitionsToBeDeleted, since the topic has dead replicas
and is ineligible for deletion.
4. Create another topic a1, also having 10 partitions, i.e.
 ./bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic a1
--replica-assignment `echo -n 1:2; for i in \`seq 9\`; do echo -n ,1:2; done`
5. Kill broker 2, causing the replicas on broker 2 to be transitioned to
OfflineReplica state on the controller.
6. Verify that the following log message appears over 200 times in the
controller.log file, one for each iteration of the a0 partitions:
 "Leader not yet assigned for partition [a0,..]. Skip sending
UpdateMetadataRequest."

What happened was:
1. During controlled shutdown, the function
KafkaController.doControlledShutdown calls
replicaStateMachine.handleStateChanges to transition all the replicas on broker
2 to OfflineState. That in turn generates 100 (10 x 10) entries of the log
message above.
2. When the broker zNode is gone in ZK, the function
KafkaController.onBrokerFailure calls KafkaController.onReplicasBecomeOffline
to transition all the replicas on broker 2 to OfflineState. And this again
generates 100 (10 x 10) entries of the log message above.

After applying the patch in this RB, I've verified that by going through the
steps above, broker 2 going offline NO LONGER generates log entries for the a0
partitions.
Also I've verified that topic deletion for topic a1 still works fine.
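
For illustration, a minimal Java sketch of the before/after shape of the change
(the controller itself is Scala; all names and structure here are hypothetical
simplifications, not the actual patch):
{code:java}
import java.util.Set;
import org.apache.kafka.common.TopicPartition;

class UpdateMetadataSketch {
    // Before (simplified): every call walks the entire to-be-deleted set, so
    // N affected partitions x M partitions-to-be-deleted = N*M total work.
    static void addUpdateMetadataBefore(Set<TopicPartition> partitions,
                                        Set<TopicPartition> partitionsToBeDeleted) {
        for (TopicPartition deleted : partitionsToBeDeleted) {
            // ... updateMetadataRequestPartitionInfo(deleted, /* beingDeleted */ true) ...
        }
        for (TopicPartition partition : partitions) {
            // ... updateMetadataRequestPartitionInfo(partition, /* beingDeleted */ false) ...
        }
    }

    // After (simplified): only the partitions passed in are processed; the
    // beingDeleted flag comes from an O(1) membership check instead.
    static void addUpdateMetadataAfter(Set<TopicPartition> partitions,
                                       Set<TopicPartition> partitionsToBeDeleted) {
        for (TopicPartition partition : partitions) {
            boolean beingDeleted = partitionsToBeDeleted.contains(partition);
            // ... updateMetadataRequestPartitionInfo(partition, beingDeleted) ...
        }
    }
}
{code}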



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-4985) kafka-acls should resolve dns names and accept ip ranges

2018-01-24 Thread Colin P. McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338376#comment-16338376
 ] 

Colin P. McCabe commented on KAFKA-4985:


bq. That argument could be applied to practically any use of DNS, so I'm not 
convinced it makes a good reason not to do this.

DNS is simply not secure.  So it shouldn't be used to provide security.

For example, I could spin up a DNS server on your local network, grant myself 
some hostname, and then do whatever I want on your broker, if hostname-based 
security is in use.

Or perhaps I take control of some public DNS server, and use it to publish fake 
DNS entries.  Your site relies on many third party DNS resolvers that aren't 
controlled by your organization, any time you want to access something not in 
your local network.

DNS-based security would just be a square wheel, even if it worked reliably 
(which it wouldn't, any time a DNS record changed)...

> kafka-acls should resolve dns names and accept ip ranges
> 
>
> Key: KAFKA-4985
> URL: https://issues.apache.org/jira/browse/KAFKA-4985
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Reporter: Ryan P
>Priority: Major
>
> Per KAFKA-2869 it looks like a conscious decision was made to move away from
> using hostnames for authorization purposes.
> This is fine; however, IP addresses are terribly inconvenient compared to
> hostnames with regard to configuring ACLs.
> I'd like to propose the following two improvements to make managing these
> ACLs easier for end-users.
> 1. Allow for simple patterns to be matched,
> e.g. --allow-host 10.17.81.11[1-9]
> 2. Allow for hostnames to be used even if they are resolved on the client
> side. Simple pattern matching on hostnames would be a welcome addition as well,
> e.g. --allow-host host.name.com
> Accepting a comma-delimited list of hostnames and IP addresses would also be
> helpful.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6462) ResetIntegrationTest unstable

2018-01-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338248#comment-16338248
 ] 

ASF GitHub Bot commented on KAFKA-6462:
---

guozhangwang closed pull request #4446: KAFKA-6462: fix unstable 
ResetIntegrationTest
URL: https://github.com/apache/kafka/pull/4446
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/streams/src/test/java/org/apache/kafka/streams/integration/AbstractResetIntegrationTest.java b/streams/src/test/java/org/apache/kafka/streams/integration/AbstractResetIntegrationTest.java
index 5819b6d3183..15d03329c28 100644
--- a/streams/src/test/java/org/apache/kafka/streams/integration/AbstractResetIntegrationTest.java
+++ b/streams/src/test/java/org/apache/kafka/streams/integration/AbstractResetIntegrationTest.java
@@ -24,6 +24,7 @@
 import org.apache.kafka.clients.producer.ProducerConfig;
 import org.apache.kafka.common.config.SslConfigs;
 import org.apache.kafka.common.config.types.Password;
+import org.apache.kafka.common.errors.TimeoutException;
 import org.apache.kafka.common.serialization.LongDeserializer;
 import org.apache.kafka.common.serialization.LongSerializer;
 import org.apache.kafka.common.serialization.Serdes;
@@ -167,11 +168,24 @@ public boolean conditionMet() {
     private final static Properties RESULT_CONSUMER_CONFIG = new Properties();
 
     void prepareTest() throws Exception {
-        cluster.deleteAndRecreateTopics(INPUT_TOPIC, OUTPUT_TOPIC, OUTPUT_TOPIC_2, OUTPUT_TOPIC_2_RERUN);
-
         prepareConfigs();
         prepareEnvironment();
 
+        // busy wait until cluster (ie, ConsumerGroupCoordinator) is available
+        while (true) {
+            Thread.sleep(50);
+
+            try {
+                TestUtils.waitForCondition(consumerGroupInactiveCondition, TIMEOUT_MULTIPLIER * CLEANUP_CONSUMER_TIMEOUT,
+                    "Test consumer group active even after waiting " + (TIMEOUT_MULTIPLIER * CLEANUP_CONSUMER_TIMEOUT) + " ms.");
+            } catch (final TimeoutException e) {
+                continue;
+            }
+            break;
+        }
+
+        cluster.deleteAndRecreateTopics(INPUT_TOPIC, OUTPUT_TOPIC, OUTPUT_TOPIC_2, OUTPUT_TOPIC_2_RERUN);
+
         add10InputElements();
     }
 


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> ResetIntegrationTest unstable
> -
>
> Key: KAFKA-6462
> URL: https://issues.apache.org/jira/browse/KAFKA-6462
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams, unit tests
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>Priority: Minor
>
> {noformat}
> java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.TimeoutException: Failed to update metadata 
> after 6 ms.
> at 
> org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.(KafkaProducer.java:1155)
> at 
> org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:847)
> at 
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:784)
> at 
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:671)
> at 
> org.apache.kafka.streams.integration.utils.IntegrationTestUtils.produceKeyValuesSynchronouslyWithTimestamp(IntegrationTestUtils.java:133)
> at 
> org.apache.kafka.streams.integration.utils.IntegrationTestUtils.produceKeyValuesSynchronouslyWithTimestamp(IntegrationTestUtils.java:118)
> at 
> org.apache.kafka.streams.integration.AbstractResetIntegrationTest.add10InputElements(AbstractResetIntegrationTest.java:199)
> at 
> org.apache.kafka.streams.integration.AbstractResetIntegrationTest.prepareTest(AbstractResetIntegrationTest.java:175)
> at 
> org.apache.kafka.streams.integration.ResetIntegrationTest.before(ResetIntegrationTest.java:56){noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6480) Add config to enforce max fetch size on the broker

2018-01-24 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-6480:
--

 Summary: Add config to enforce max fetch size on the broker
 Key: KAFKA-6480
 URL: https://issues.apache.org/jira/browse/KAFKA-6480
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson


Users are increasingly hitting memory problems due to message format
down-conversion. The problem is basically that we have to do the
down-conversion in memory. Since the default fetch size is 50 MB, it doesn't
take many fetch requests to cause an OOM. One mitigation is KAFKA-6352. It
would also be helpful if the broker had a configuration to restrict the maximum
allowed fetch size across all consumers. This would also prevent a malicious
client from using oversized fetches to DoS the server.
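
For context, the fetch size cap is today chosen entirely by the client. A
minimal Java sketch (the broker address, group id, and topic are placeholders):
{code:java}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class FetchSizeDemo {
    public static void main(String[] args) {
        Properties config = new Properties();
        config.put("bootstrap.servers", "localhost:9092"); // placeholder
        config.put("group.id", "fetch-size-demo"); // placeholder
        config.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        config.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // fetch.max.bytes is a client-chosen cap (default 52428800, i.e. ~50 MB).
        // Nothing on the broker side rejects a much larger value today, which is
        // what this issue proposes to change.
        config.put("fetch.max.bytes", 200 * 1024 * 1024);
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(config)) {
            consumer.subscribe(Collections.singletonList("some-topic")); // placeholder
            consumer.poll(100);
        }
    }
}
{code}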



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6242) Enable resizing various broker thread pools

2018-01-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16338019#comment-16338019
 ] 

ASF GitHub Bot commented on KAFKA-6242:
---

rajinisivaram opened a new pull request #4471: KAFKA-6242: Dynamic resize of 
various broker thread pools
URL: https://github.com/apache/kafka/pull/4471
 
 
   Dynamic resize of broker thread pools as described in KIP-226:
- num.network.threads
- num.io.threads
- num.replica.fetchers
- num.recovery.threads.per.data.dir
- background.threads
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Enable resizing various broker thread pools
> ---
>
> Key: KAFKA-6242
> URL: https://issues.apache.org/jira/browse/KAFKA-6242
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Major
> Fix For: 1.1.0
>
>
> See 
> [KIP-226|https://cwiki.apache.org/confluence/display/KAFKA/KIP-226+-+Dynamic+Broker+Configuration]
>  for details
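
Once these thread pool settings are dynamic, they can be changed at runtime
without a broker restart. A hedged sketch using AdminClient.alterConfigs
(broker id "0" and the address are placeholder assumptions):
{code:java}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class ResizeIoThreads {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        try (AdminClient admin = AdminClient.create(props)) {
            // Target broker 0's per-broker (dynamic) configuration.
            ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "0");
            Config update = new Config(
                Collections.singletonList(new ConfigEntry("num.io.threads", "16")));
            // alterConfigs applies the new dynamic config for the resource.
            admin.alterConfigs(Collections.singletonMap(broker, update)).all().get();
        }
    }
}
{code}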



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6418) AdminClient should handle empty or null topic names better

2018-01-24 Thread Colin P. McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin P. McCabe updated KAFKA-6418:
---
Priority: Minor  (was: Major)

> AdminClient should handle empty or null topic names better
> --
>
> Key: KAFKA-6418
> URL: https://issues.apache.org/jira/browse/KAFKA-6418
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: dan norwood
>Priority: Minor
>
> if you try to `createTopics(Collections.singleton(new NewTopic()));` you will 
> get something like the following:
> {noformat}
> [2018-01-02 11:20:46,481] ERROR [kafka-admin-client-thread | adminclient-3] 
> Uncaught exception in thread 'kafka-admin-client-thread | adminclient-3': 
> (org.apache.kafka.common.utils.KafkaThread)
> org.apache.kafka.common.protocol.types.SchemaException: Error computing size 
> for field 'create_topic_requests': Error computing size for field 'topic': 
> Missing value for field 'topic' which has no default value.
>   at org.apache.kafka.common.protocol.types.Schema.sizeOf(Schema.java:94)
>   at org.apache.kafka.common.protocol.types.Struct.sizeOf(Struct.java:341)
>   at 
> org.apache.kafka.common.requests.AbstractRequestResponse.serialize(AbstractRequestResponse.java:28)
>   at 
> org.apache.kafka.common.requests.AbstractRequest.serialize(AbstractRequest.java:98)
>   at 
> org.apache.kafka.common.requests.AbstractRequest.toSend(AbstractRequest.java:91)
>   at org.apache.kafka.clients.NetworkClient.doSend(NetworkClient.java:423)
>   at org.apache.kafka.clients.NetworkClient.doSend(NetworkClient.java:397)
>   at org.apache.kafka.clients.NetworkClient.send(NetworkClient.java:358)
>   at 
> org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.sendEligibleCalls(KafkaAdminClient.java:810)
>   at 
> org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1002)
>   at java.lang.Thread.run(Thread.java:745)
> [2018-01-02 11:20:46,481] ERROR [kafka-admin-client-thread | adminclient-3] 
> Uncaught exception in thread 'kafka-admin-client-thread | adminclient-3': 
> (org.apache.kafka.common.utils.KafkaThread)
> org.apache.kafka.common.protocol.types.SchemaException: Error computing size 
> for field 'create_topic_requests': Error computing size for field 'topic': 
> Missing value for field 'topic' which has no default value.
>   at org.apache.kafka.common.protocol.types.Schema.sizeOf(Schema.java:94)
>   at org.apache.kafka.common.protocol.types.Struct.sizeOf(Struct.java:341)
>   at 
> org.apache.kafka.common.requests.AbstractRequestResponse.serialize(AbstractRequestResponse.java:28)
>   at 
> org.apache.kafka.common.requests.AbstractRequest.serialize(AbstractRequest.java:98)
>   at 
> org.apache.kafka.common.requests.AbstractRequest.toSend(AbstractRequest.java:91)
>   at org.apache.kafka.clients.NetworkClient.doSend(NetworkClient.java:423)
>   at org.apache.kafka.clients.NetworkClient.doSend(NetworkClient.java:397)
>   at org.apache.kafka.clients.NetworkClient.send(NetworkClient.java:358)
>   at 
> org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.sendEligibleCalls(KafkaAdminClient.java:810)
>   at 
> org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1002)
>   at java.lang.Thread.run(Thread.java:745)
> [2018-01-02 11:21:01,383] ERROR [qtp1875757262-59] Unhandled exception 
> resulting in internal server error response 
> (io.confluent.rest.exceptions.GenericExceptionMapper)
> java.util.concurrent.TimeoutException
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:108)
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:225)
>   at 
> io.confluent.controlcenter.data.KafkaDao.createTopics(KafkaDao.java:85)
>   at 
> io.confluent.controlcenter.rest.KafkaResource.createTopic(KafkaResource.java:87)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
> {noformat}
> the actual error prints immediately, but the adminclient still waits for a 
> timeout and then exposes a TimeoutException to the user
> Note that no other elements of the batch request are performed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6418) AdminClient should handle empty or null topic names better

2018-01-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16337994#comment-16337994
 ] 

ASF GitHub Bot commented on KAFKA-6418:
---

cmccabe opened a new pull request #4470: KAFKA-6418: AdminClient should handle 
empty or null topic names better
URL: https://github.com/apache/kafka/pull/4470
 
 
   AdminClient should return an InvalidTopicException when the topic name is 
empty or null.  Previously, the client would try to serialize these invalid 
topic names, and the whole batch request would fail, including unrelated 
topics.  Also add a unit test.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> AdminClient should handle empty or null topic names better
> --
>
> Key: KAFKA-6418
> URL: https://issues.apache.org/jira/browse/KAFKA-6418
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: dan norwood
>Priority: Major
>
> if you try to `createTopics(Collections.singleton(new NewTopic()));` you will 
> get something like the following:
> {noformat}
> [2018-01-02 11:20:46,481] ERROR [kafka-admin-client-thread | adminclient-3] 
> Uncaught exception in thread 'kafka-admin-client-thread | adminclient-3': 
> (org.apache.kafka.common.utils.KafkaThread)
> org.apache.kafka.common.protocol.types.SchemaException: Error computing size 
> for field 'create_topic_requests': Error computing size for field 'topic': 
> Missing value for field 'topic' which has no default value.
>   at org.apache.kafka.common.protocol.types.Schema.sizeOf(Schema.java:94)
>   at org.apache.kafka.common.protocol.types.Struct.sizeOf(Struct.java:341)
>   at 
> org.apache.kafka.common.requests.AbstractRequestResponse.serialize(AbstractRequestResponse.java:28)
>   at 
> org.apache.kafka.common.requests.AbstractRequest.serialize(AbstractRequest.java:98)
>   at 
> org.apache.kafka.common.requests.AbstractRequest.toSend(AbstractRequest.java:91)
>   at org.apache.kafka.clients.NetworkClient.doSend(NetworkClient.java:423)
>   at org.apache.kafka.clients.NetworkClient.doSend(NetworkClient.java:397)
>   at org.apache.kafka.clients.NetworkClient.send(NetworkClient.java:358)
>   at 
> org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.sendEligibleCalls(KafkaAdminClient.java:810)
>   at 
> org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1002)
>   at java.lang.Thread.run(Thread.java:745)
> [2018-01-02 11:20:46,481] ERROR [kafka-admin-client-thread | adminclient-3] 
> Uncaught exception in thread 'kafka-admin-client-thread | adminclient-3': 
> (org.apache.kafka.common.utils.KafkaThread)
> org.apache.kafka.common.protocol.types.SchemaException: Error computing size 
> for field 'create_topic_requests': Error computing size for field 'topic': 
> Missing value for field 'topic' which has no default value.
>   at org.apache.kafka.common.protocol.types.Schema.sizeOf(Schema.java:94)
>   at org.apache.kafka.common.protocol.types.Struct.sizeOf(Struct.java:341)
>   at 
> org.apache.kafka.common.requests.AbstractRequestResponse.serialize(AbstractRequestResponse.java:28)
>   at 
> org.apache.kafka.common.requests.AbstractRequest.serialize(AbstractRequest.java:98)
>   at 
> org.apache.kafka.common.requests.AbstractRequest.toSend(AbstractRequest.java:91)
>   at org.apache.kafka.clients.NetworkClient.doSend(NetworkClient.java:423)
>   at org.apache.kafka.clients.NetworkClient.doSend(NetworkClient.java:397)
>   at org.apache.kafka.clients.NetworkClient.send(NetworkClient.java:358)
>   at 
> org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.sendEligibleCalls(KafkaAdminClient.java:810)
>   at 
> org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1002)
>   at java.lang.Thread.run(Thread.java:745)
> [2018-01-02 11:21:01,383] ERROR [qtp1875757262-59] Unhandled exception 
> resulting in internal server error response 
> (io.confluent.rest.exceptions.GenericExceptionMapper)
> java.util.concurrent.TimeoutException
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:108)
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:225)
>   at 
> io.confluent.controlcenter.data.KafkaDao.createTopics(KafkaDao.java:85)
>   at 
> io.confluent.controlcenter.rest.KafkaResource.createTopic(KafkaResource.java:87)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 

[jira] [Created] (KAFKA-6479) Broker file descriptor leak after consumer request timeout

2018-01-24 Thread Ryan Leslie (JIRA)
Ryan Leslie created KAFKA-6479:
--

 Summary: Broker file descriptor leak after consumer request timeout
 Key: KAFKA-6479
 URL: https://issues.apache.org/jira/browse/KAFKA-6479
 Project: Kafka
  Issue Type: Bug
  Components: controller
Affects Versions: 1.0.0
Reporter: Ryan Leslie


When a consumer request times out, i.e. takes longer than request.timeout.ms, 
and the client disconnects from the coordinator, the coordinator may leak file 
descriptors. The following code produces this behavior:


{code:java}
Properties config = new Properties();
config.put("bootstrap.servers", BROKERS);
config.put("group.id", "leak-test");
config.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
config.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
config.put("max.poll.interval.ms", Integer.MAX_VALUE);
config.put("request.timeout.ms", 12000);

KafkaConsumer<String, String> consumer1 = new KafkaConsumer<>(config);
KafkaConsumer<String, String> consumer2 = new KafkaConsumer<>(config);

List<String> topics = Collections.singletonList("leak-test");
consumer1.subscribe(topics);
consumer2.subscribe(topics);

consumer1.poll(100);
consumer2.poll(100);
{code}

When the above executes, consumer 2 will attempt to rebalance indefinitely 
(blocked by the inactive consumer 1), logging a _Marking the coordinator dead_ 
message every 12 seconds after giving up on the JOIN_GROUP request and 
disconnecting. Unless the consumer exits or times out, this will cause a socket 
in CLOSE_WAIT to leak in the coordinator and the broker will eventually run out 
of file descriptors and crash.

Aside from faulty code as in the example above, or an intentional DoS, any 
client bug causing a consumer to block, e.g. KAFKA-6397, could also result in 
this leak.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6418) AdminClient should handle empty or null topic names better

2018-01-24 Thread Colin P. McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16337951#comment-16337951
 ] 

Colin P. McCabe commented on KAFKA-6418:


The serialization exception would prevent any other part of the request from 
being performed, though.  So we just have to check for empty or null topic 
names and issue an appropriate error.
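
A minimal sketch of such a guard (illustrative only; the method and variable
names here are hypothetical, not the actual patch in the PR):
{code:java}
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.KafkaFuture;
import org.apache.kafka.common.errors.InvalidTopicException;
import org.apache.kafka.common.internals.KafkaFutureImpl;

public class TopicNameGuard {
    // Fail invalid names locally, so the rest of the batch can still be sent.
    static Map<String, KafkaFuture<Void>> failInvalidTopicNames(List<NewTopic> newTopics) {
        Map<String, KafkaFuture<Void>> failed = new HashMap<>();
        for (NewTopic newTopic : newTopics) {
            String topic = newTopic.name();
            if (topic == null || topic.isEmpty()) {
                KafkaFutureImpl<Void> future = new KafkaFutureImpl<>();
                future.completeExceptionally(new InvalidTopicException(
                    "The given topic name '" + topic + "' cannot be represented in a request."));
                failed.put(topic, future);
            }
            // ... otherwise the topic is included in the CreateTopicsRequest ...
        }
        return failed;
    }
}
{code}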

> AdminClient should handle empty or null topic names better
> --
>
> Key: KAFKA-6418
> URL: https://issues.apache.org/jira/browse/KAFKA-6418
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: dan norwood
>Priority: Major
>
> if you try to `createTopics(Collections.singleton(new NewTopic()));` you will 
> get something like the following:
> {noformat}
> [2018-01-02 11:20:46,481] ERROR [kafka-admin-client-thread | adminclient-3] 
> Uncaught exception in thread 'kafka-admin-client-thread | adminclient-3': 
> (org.apache.kafka.common.utils.KafkaThread)
> org.apache.kafka.common.protocol.types.SchemaException: Error computing size 
> for field 'create_topic_requests': Error computing size for field 'topic': 
> Missing value for field 'topic' which has no default value.
>   at org.apache.kafka.common.protocol.types.Schema.sizeOf(Schema.java:94)
>   at org.apache.kafka.common.protocol.types.Struct.sizeOf(Struct.java:341)
>   at 
> org.apache.kafka.common.requests.AbstractRequestResponse.serialize(AbstractRequestResponse.java:28)
>   at 
> org.apache.kafka.common.requests.AbstractRequest.serialize(AbstractRequest.java:98)
>   at 
> org.apache.kafka.common.requests.AbstractRequest.toSend(AbstractRequest.java:91)
>   at org.apache.kafka.clients.NetworkClient.doSend(NetworkClient.java:423)
>   at org.apache.kafka.clients.NetworkClient.doSend(NetworkClient.java:397)
>   at org.apache.kafka.clients.NetworkClient.send(NetworkClient.java:358)
>   at 
> org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.sendEligibleCalls(KafkaAdminClient.java:810)
>   at 
> org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1002)
>   at java.lang.Thread.run(Thread.java:745)
> [2018-01-02 11:20:46,481] ERROR [kafka-admin-client-thread | adminclient-3] 
> Uncaught exception in thread 'kafka-admin-client-thread | adminclient-3': 
> (org.apache.kafka.common.utils.KafkaThread)
> org.apache.kafka.common.protocol.types.SchemaException: Error computing size 
> for field 'create_topic_requests': Error computing size for field 'topic': 
> Missing value for field 'topic' which has no default value.
>   at org.apache.kafka.common.protocol.types.Schema.sizeOf(Schema.java:94)
>   at org.apache.kafka.common.protocol.types.Struct.sizeOf(Struct.java:341)
>   at 
> org.apache.kafka.common.requests.AbstractRequestResponse.serialize(AbstractRequestResponse.java:28)
>   at 
> org.apache.kafka.common.requests.AbstractRequest.serialize(AbstractRequest.java:98)
>   at 
> org.apache.kafka.common.requests.AbstractRequest.toSend(AbstractRequest.java:91)
>   at org.apache.kafka.clients.NetworkClient.doSend(NetworkClient.java:423)
>   at org.apache.kafka.clients.NetworkClient.doSend(NetworkClient.java:397)
>   at org.apache.kafka.clients.NetworkClient.send(NetworkClient.java:358)
>   at 
> org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.sendEligibleCalls(KafkaAdminClient.java:810)
>   at 
> org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1002)
>   at java.lang.Thread.run(Thread.java:745)
> [2018-01-02 11:21:01,383] ERROR [qtp1875757262-59] Unhandled exception 
> resulting in internal server error response 
> (io.confluent.rest.exceptions.GenericExceptionMapper)
> java.util.concurrent.TimeoutException
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:108)
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:225)
>   at 
> io.confluent.controlcenter.data.KafkaDao.createTopics(KafkaDao.java:85)
>   at 
> io.confluent.controlcenter.rest.KafkaResource.createTopic(KafkaResource.java:87)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
> {noformat}
> the actual error prints immediately, but the adminclient still waits for a 
> timeout and then exposes a TimeoutException to the user
> Note that no other elements of the batch request are performed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6418) AdminClient should handle empty or null topic names better

2018-01-24 Thread Colin P. McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin P. McCabe updated KAFKA-6418:
---
Summary: AdminClient should handle empty or null topic names better  (was: 
adminclient throws timeoutexception when there is a SchemaException)

> AdminClient should handle empty or null topic names better
> --
>
> Key: KAFKA-6418
> URL: https://issues.apache.org/jira/browse/KAFKA-6418
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: dan norwood
>Priority: Major
>
> if you try to `createTopics(Collections.singleton(new NewTopic()));` you will 
> get something like the following:
> {noformat}
> [2018-01-02 11:20:46,481] ERROR [kafka-admin-client-thread | adminclient-3] 
> Uncaught exception in thread 'kafka-admin-client-thread | adminclient-3': 
> (org.apache.kafka.common.utils.KafkaThread)
> org.apache.kafka.common.protocol.types.SchemaException: Error computing size 
> for field 'create_topic_requests': Error computing size for field 'topic': 
> Missing value for field 'topic' which has no default value.
>   at org.apache.kafka.common.protocol.types.Schema.sizeOf(Schema.java:94)
>   at org.apache.kafka.common.protocol.types.Struct.sizeOf(Struct.java:341)
>   at 
> org.apache.kafka.common.requests.AbstractRequestResponse.serialize(AbstractRequestResponse.java:28)
>   at 
> org.apache.kafka.common.requests.AbstractRequest.serialize(AbstractRequest.java:98)
>   at 
> org.apache.kafka.common.requests.AbstractRequest.toSend(AbstractRequest.java:91)
>   at org.apache.kafka.clients.NetworkClient.doSend(NetworkClient.java:423)
>   at org.apache.kafka.clients.NetworkClient.doSend(NetworkClient.java:397)
>   at org.apache.kafka.clients.NetworkClient.send(NetworkClient.java:358)
>   at 
> org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.sendEligibleCalls(KafkaAdminClient.java:810)
>   at 
> org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1002)
>   at java.lang.Thread.run(Thread.java:745)
> [2018-01-02 11:20:46,481] ERROR [kafka-admin-client-thread | adminclient-3] 
> Uncaught exception in thread 'kafka-admin-client-thread | adminclient-3': 
> (org.apache.kafka.common.utils.KafkaThread)
> org.apache.kafka.common.protocol.types.SchemaException: Error computing size 
> for field 'create_topic_requests': Error computing size for field 'topic': 
> Missing value for field 'topic' which has no default value.
>   at org.apache.kafka.common.protocol.types.Schema.sizeOf(Schema.java:94)
>   at org.apache.kafka.common.protocol.types.Struct.sizeOf(Struct.java:341)
>   at 
> org.apache.kafka.common.requests.AbstractRequestResponse.serialize(AbstractRequestResponse.java:28)
>   at 
> org.apache.kafka.common.requests.AbstractRequest.serialize(AbstractRequest.java:98)
>   at 
> org.apache.kafka.common.requests.AbstractRequest.toSend(AbstractRequest.java:91)
>   at org.apache.kafka.clients.NetworkClient.doSend(NetworkClient.java:423)
>   at org.apache.kafka.clients.NetworkClient.doSend(NetworkClient.java:397)
>   at org.apache.kafka.clients.NetworkClient.send(NetworkClient.java:358)
>   at 
> org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.sendEligibleCalls(KafkaAdminClient.java:810)
>   at 
> org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1002)
>   at java.lang.Thread.run(Thread.java:745)
> [2018-01-02 11:21:01,383] ERROR [qtp1875757262-59] Unhandled exception 
> resulting in internal server error response 
> (io.confluent.rest.exceptions.GenericExceptionMapper)
> java.util.concurrent.TimeoutException
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:108)
>   at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:225)
>   at 
> io.confluent.controlcenter.data.KafkaDao.createTopics(KafkaDao.java:85)
>   at 
> io.confluent.controlcenter.rest.KafkaResource.createTopic(KafkaResource.java:87)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
> {noformat}
> the actual error prints immediately, but the adminclient still waits for a 
> timeout and then exposes a TimeoutException to the user



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6418) AdminClient should handle empty or null topic names better

2018-01-24 Thread Colin P. McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin P. McCabe updated KAFKA-6418:
---
Description: 
if you try to `createTopics(Collections.singleton(new NewTopic()));` you will 
get something like the following:
{noformat}
[2018-01-02 11:20:46,481] ERROR [kafka-admin-client-thread | adminclient-3] 
Uncaught exception in thread 'kafka-admin-client-thread | adminclient-3': 
(org.apache.kafka.common.utils.KafkaThread)
org.apache.kafka.common.protocol.types.SchemaException: Error computing size 
for field 'create_topic_requests': Error computing size for field 'topic': 
Missing value for field 'topic' which has no default value.
at org.apache.kafka.common.protocol.types.Schema.sizeOf(Schema.java:94)
at org.apache.kafka.common.protocol.types.Struct.sizeOf(Struct.java:341)
at 
org.apache.kafka.common.requests.AbstractRequestResponse.serialize(AbstractRequestResponse.java:28)
at 
org.apache.kafka.common.requests.AbstractRequest.serialize(AbstractRequest.java:98)
at 
org.apache.kafka.common.requests.AbstractRequest.toSend(AbstractRequest.java:91)
at org.apache.kafka.clients.NetworkClient.doSend(NetworkClient.java:423)
at org.apache.kafka.clients.NetworkClient.doSend(NetworkClient.java:397)
at org.apache.kafka.clients.NetworkClient.send(NetworkClient.java:358)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.sendEligibleCalls(KafkaAdminClient.java:810)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1002)
at java.lang.Thread.run(Thread.java:745)
[2018-01-02 11:20:46,481] ERROR [kafka-admin-client-thread | adminclient-3] 
Uncaught exception in thread 'kafka-admin-client-thread | adminclient-3': 
(org.apache.kafka.common.utils.KafkaThread)
org.apache.kafka.common.protocol.types.SchemaException: Error computing size 
for field 'create_topic_requests': Error computing size for field 'topic': 
Missing value for field 'topic' which has no default value.
at org.apache.kafka.common.protocol.types.Schema.sizeOf(Schema.java:94)
at org.apache.kafka.common.protocol.types.Struct.sizeOf(Struct.java:341)
at 
org.apache.kafka.common.requests.AbstractRequestResponse.serialize(AbstractRequestResponse.java:28)
at 
org.apache.kafka.common.requests.AbstractRequest.serialize(AbstractRequest.java:98)
at 
org.apache.kafka.common.requests.AbstractRequest.toSend(AbstractRequest.java:91)
at org.apache.kafka.clients.NetworkClient.doSend(NetworkClient.java:423)
at org.apache.kafka.clients.NetworkClient.doSend(NetworkClient.java:397)
at org.apache.kafka.clients.NetworkClient.send(NetworkClient.java:358)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.sendEligibleCalls(KafkaAdminClient.java:810)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1002)
at java.lang.Thread.run(Thread.java:745)
[2018-01-02 11:21:01,383] ERROR [qtp1875757262-59] Unhandled exception 
resulting in internal server error response 
(io.confluent.rest.exceptions.GenericExceptionMapper)
java.util.concurrent.TimeoutException
at 
org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:108)
at 
org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:225)
at 
io.confluent.controlcenter.data.KafkaDao.createTopics(KafkaDao.java:85)
at 
io.confluent.controlcenter.rest.KafkaResource.createTopic(KafkaResource.java:87)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
{noformat}

The actual error is printed immediately, but the AdminClient still waits for the 
full request timeout and only then surfaces a TimeoutException to the user.

Note that none of the other operations in the batch request are performed.
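
A client-side pre-flight check would avoid both the uncaught exception in the 
network thread and the eventual timeout. The sketch below is illustrative only 
(the class and method names are invented, not KafkaAdminClient internals): 
topics that cannot be serialized get their futures failed immediately, so the 
rest of the batch could still proceed.
{code:java}
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.KafkaFuture;
import org.apache.kafka.common.errors.InvalidTopicException;
import org.apache.kafka.common.internals.KafkaFutureImpl;

public class CreateTopicsPreflightSketch {

    // Fail the future of any NewTopic that cannot be serialized (null or
    // empty name) before the request ever reaches the network thread.
    static Map<String, KafkaFuture<Void>> preflight(Collection<NewTopic> newTopics) {
        Map<String, KafkaFuture<Void>> futures = new HashMap<>();
        for (NewTopic newTopic : newTopics) {
            KafkaFutureImpl<Void> future = new KafkaFutureImpl<>();
            if (newTopic.name() == null || newTopic.name().isEmpty()) {
                future.completeExceptionally(
                        new InvalidTopicException("Topic name must not be null or empty"));
            }
            futures.put(String.valueOf(newTopic.name()), future);
        }
        return futures;
    }
}
{code}
With a check like this, the caller would see an {{InvalidTopicException}} on the 
affected future right away instead of a {{TimeoutException}}.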


[jira] [Updated] (KAFKA-6475) ConfigException on the broker results in UnknownServerException in the admin client

2018-01-24 Thread Colin P. McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin P. McCabe updated KAFKA-6475:
---
Priority: Minor  (was: Major)

> ConfigException on the broker results in UnknownServerException in the admin 
> client
> ---
>
> Key: KAFKA-6475
> URL: https://issues.apache.org/jira/browse/KAFKA-6475
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Xavier Léauté
>Assignee: Colin P. McCabe
>Priority: Minor
>
> Calling AdminClient.alterConfigs with an invalid configuration may cause 
> ConfigException to be thrown on the broker side, which results in an 
> UnknownServerException thrown by the admin client. It would probably make 
> more sense for the admin client to throw InvalidConfigurationException in 
> that case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6475) ConfigException on the broker results in UnknownServerException in the admin client

2018-01-24 Thread Colin P. McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16337904#comment-16337904
 ] 

Colin P. McCabe commented on KAFKA-6475:


It's unclear how this is happening, since {{AdminManager#alterConfigs}} seems 
to catch {{ConfigException}} and translate it into {{InvalidRequestException}}:
{code:java}
  def alterConfigs(configs: Map[Resource, AlterConfigsRequest.Config], validateOnly: Boolean): Map[Resource, ApiError] = {
    configs.map { case (resource, config) =>
      ...
      try {
        resource.`type` match {
          ... alter configs implementation ...
        }
      } catch {
        case e @ (_: ConfigException | _: IllegalArgumentException) =>
          val message = s"Invalid config value for resource $resource: ${e.getMessage}"
          info(message)
          resource -> ApiError.fromThrowable(new InvalidRequestException(message, e))
      }
{code}

Do you have a code example that reproduces the problem?
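
For reference, a minimal call that exercises this path might look like the 
sketch below (broker address, topic name, and the chosen invalid value are 
placeholders; whether that value actually reaches the broker-side 
{{ConfigException}} handler, rather than being rejected elsewhere, is exactly 
what a repro would need to show):
{code:java}
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class AlterConfigsRepro {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Deliberately invalid value for a known topic config.
            ConfigResource resource =
                    new ConfigResource(ConfigResource.Type.TOPIC, "test-topic");
            Config config = new Config(Collections.singleton(
                    new ConfigEntry("cleanup.policy", "not-a-real-policy")));
            // Expected: InvalidConfigurationException;
            // reported in this ticket: UnknownServerException.
            admin.alterConfigs(Collections.singletonMap(resource, config)).all().get();
        }
    }
}
{code}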

> ConfigException on the broker results in UnknownServerException in the admin 
> client
> ---
>
> Key: KAFKA-6475
> URL: https://issues.apache.org/jira/browse/KAFKA-6475
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Xavier Léauté
>Assignee: Colin P. McCabe
>Priority: Major
>
> Calling AdminClient.alterConfigs with an invalid configuration may cause 
> ConfigException to be thrown on the broker side, which results in an 
> UnknownServerException thrown by the admin client. It would probably make 
> more sense for the admin client to throw InvalidConfigurationException in 
> that case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-4292) KIP-86: Configurable SASL callback handlers

2018-01-24 Thread Rajini Sivaram (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram updated KAFKA-4292:
--
Fix Version/s: 1.1.0

> KIP-86: Configurable SASL callback handlers
> ---
>
> Key: KAFKA-4292
> URL: https://issues.apache.org/jira/browse/KAFKA-4292
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.10.1.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Major
> Fix For: 1.1.0
>
>
> Implementation of KIP-86: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-86%3A+Configurable+SASL+callback+handlers



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6312) Add documentation about kafka-consumer-groups.sh's ability to set/change offsets

2018-01-24 Thread Mayank Tankhiwale (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16337719#comment-16337719
 ] 

Mayank Tankhiwale commented on KAFKA-6312:
--

[~prasanna1433] if you are not looking into this one, I'd like to assign this 
to myself.

> Add documentation about kafka-consumer-groups.sh's ability to set/change 
> offsets
> 
>
> Key: KAFKA-6312
> URL: https://issues.apache.org/jira/browse/KAFKA-6312
> Project: Kafka
>  Issue Type: Improvement
>  Components: documentation
>Reporter: James Cheng
>Priority: Major
>  Labels: newbie
>
> KIP-122 added the ability for kafka-consumer-groups.sh to reset/change 
> consumer offsets at a fine-grained level.
> There is documentation on it in the kafka-consumer-groups.sh usage text. 
> There is no such documentation on the kafka.apache.org website. We should add 
> some documentation to the website, so that users can read about the 
> functionality without having the tools installed.
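
For reference, a reset invocation with the tool looks roughly like this (flags 
as shown in the tool's usage text; group and topic names are placeholders):
{noformat}
# Dry run (default): print the offsets that would be committed, change nothing.
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --group my-group --topic my-topic \
  --reset-offsets --to-earliest

# The same command with --execute actually commits the new offsets.
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --group my-group --topic my-topic \
  --reset-offsets --to-earliest --execute
{noformat}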



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6457) Error: NOT_LEADER_FOR_PARTITION leads to NPE

2018-01-24 Thread Seweryn Habdank-Wojewodzki (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16337600#comment-16337600
 ] 

Seweryn Habdank-Wojewodzki commented on KAFKA-6457:
---

Questions:

AbstractCoordinator, line 363:
RequestFuture<ByteBuffer> future = initiateJoinGroup();
client.poll(future);

How is it guaranteed that {{future}} is never null?

Line 402:
joinFuture = sendJoinGroupRequest();

How is it guaranteed that {{joinFuture}} is not null?
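
Purely as an illustration of the null-safety concern (this is not the actual 
{{AbstractCoordinator}} code; {{CompletableFuture}} stands in for 
{{RequestFuture}}), an explicit guard would fail fast at the call site instead 
of with an NPE later in the poll loop:
{code:java}
import java.util.Objects;
import java.util.concurrent.CompletableFuture;

public class NullGuardSketch {

    // Stand-in for initiateJoinGroup(); the real method returns a
    // RequestFuture, but any future type shows the same pattern.
    static CompletableFuture<Void> initiateJoinGroup() {
        return new CompletableFuture<>();
    }

    public static void main(String[] args) {
        // Make the "never null" assumption explicit: fail immediately with a
        // clear message rather than somewhere deep inside the poll loop.
        CompletableFuture<Void> future = Objects.requireNonNull(
                initiateJoinGroup(), "initiateJoinGroup() returned null");
        // client.poll(future) would follow here in the real code.
        System.out.println("future created: " + future);
    }
}
{code}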

> Error: NOT_LEADER_FOR_PARTITION leads to NPE
> 
>
> Key: KAFKA-6457
> URL: https://issues.apache.org/jira/browse/KAFKA-6457
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 1.0.0
>Reporter: Seweryn Habdank-Wojewodzki
>Priority: Major
>
> One of our nodes died. Then the second one took over all responsibility.
> But the streaming application crashed in the meantime due to an NPE caused by 
> {{Error: NOT_LEADER_FOR_PARTITION}}.
> The stack trace is below.
>  
> Is this expected?
>  
> {code:java}
> 2018-01-17 11:47:21 [my] [WARN ] Sender:251 - [Producer ...2018-01-17 
> 11:47:21 [my] [WARN ] Sender:251 - [Producer 
> clientId=restreamer-my-fef07ca9-b067-45c0-a5af-68b5a1730dac-StreamThread-1-producer]
>  Got error produce response with correlation id 768962 on topic-partition 
> my_internal_topic-5, retrying (9 attempts left). Error: 
> NOT_LEADER_FOR_PARTITION
> 2018-01-17 11:47:21 [my] [WARN ] Sender:251 - [Producer 
> clientId=restreamer-my-fef07ca9-b067-45c0-a5af-68b5a1730dac-StreamThread-1-producer]
>  Got error produce response with correlation id 768962 on topic-partition 
> my_internal_topic-7, retrying (9 attempts left). Error: 
> NOT_LEADER_FOR_PARTITION
> 2018-01-17 11:47:21 [my] [ERROR] AbstractCoordinator:296 - [Consumer 
> clientId=restreamer-my-fef07ca9-b067-45c0-a5af-68b5a1730dac-StreamThread-1-consumer,
>  groupId=restreamer-my] Heartbeat thread for group restreamer-my failed due 
> to unexpected error
> java.lang.NullPointerException: null
>     at 
> org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:436) 
> ~[my-restreamer.jar:?]
>     at org.apache.kafka.common.network.Selector.poll(Selector.java:395) 
> ~[my-restreamer.jar:?]
>     at 
> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:460) 
> ~[my-restreamer.jar:?]
>     at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:238)
>  ~[my-restreamer.jar:?]
>     at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:275)
>  ~[my-restreamer.jar:?]
>     at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:934)
>  [my-restreamer.jar:?]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6457) Error: NOT_LEADER_FOR_PARTITION leads to NPE

2018-01-24 Thread Seweryn Habdank-Wojewodzki (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16337422#comment-16337422
 ] 

Seweryn Habdank-Wojewodzki commented on KAFKA-6457:
---

All of these tickets (KAFKA-6459, KAFKA-6457, KAFKA-5882) might be connected: 
in each case something goes wrong in AbstractCoordinator when a rebalance 
happens (or a new leader is elected). There are places in the code where 
objects are used although they are {{null}}, and this class is common to all of 
my stack traces. It is of course an open question whether AbstractCoordinator 
handles {{null}} incorrectly, or whether {{null}} should never appear in those 
places and the underlying class that provides the objects is wrong to 
delete/reset them to {{null}}.

> Error: NOT_LEADER_FOR_PARTITION leads to NPE
> 
>
> Key: KAFKA-6457
> URL: https://issues.apache.org/jira/browse/KAFKA-6457
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 1.0.0
>Reporter: Seweryn Habdank-Wojewodzki
>Priority: Major
>
> One of our nodes died. Then the second one took over all responsibility.
> But the streaming application crashed in the meantime due to an NPE caused by 
> {{Error: NOT_LEADER_FOR_PARTITION}}.
> The stack trace is below.
>  
> Is this expected?
>  
> {code:java}
> 2018-01-17 11:47:21 [my] [WARN ] Sender:251 - [Producer ...2018-01-17 
> 11:47:21 [my] [WARN ] Sender:251 - [Producer 
> clientId=restreamer-my-fef07ca9-b067-45c0-a5af-68b5a1730dac-StreamThread-1-producer]
>  Got error produce response with correlation id 768962 on topic-partition 
> my_internal_topic-5, retrying (9 attempts left). Error: 
> NOT_LEADER_FOR_PARTITION
> 2018-01-17 11:47:21 [my] [WARN ] Sender:251 - [Producer 
> clientId=restreamer-my-fef07ca9-b067-45c0-a5af-68b5a1730dac-StreamThread-1-producer]
>  Got error produce response with correlation id 768962 on topic-partition 
> my_internal_topic-7, retrying (9 attempts left). Error: 
> NOT_LEADER_FOR_PARTITION
> 2018-01-17 11:47:21 [my] [ERROR] AbstractCoordinator:296 - [Consumer 
> clientId=restreamer-my-fef07ca9-b067-45c0-a5af-68b5a1730dac-StreamThread-1-consumer,
>  groupId=restreamer-my] Heartbeat thread for group restreamer-my failed due 
> to unexpected error
> java.lang.NullPointerException: null
>     at 
> org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:436) 
> ~[my-restreamer.jar:?]
>     at org.apache.kafka.common.network.Selector.poll(Selector.java:395) 
> ~[my-restreamer.jar:?]
>     at 
> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:460) 
> ~[my-restreamer.jar:?]
>     at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:238)
>  ~[my-restreamer.jar:?]
>     at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:275)
>  ~[my-restreamer.jar:?]
>     at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:934)
>  [my-restreamer.jar:?]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6478) kafka-run-class.bat fails if CLASSPATH contains spaces

2018-01-24 Thread Bert Roos (JIRA)
Bert Roos created KAFKA-6478:


 Summary: kafka-run-class.bat fails if CLASSPATH contains spaces
 Key: KAFKA-6478
 URL: https://issues.apache.org/jira/browse/KAFKA-6478
 Project: Kafka
  Issue Type: Bug
  Components: tools
Affects Versions: 1.0.0
Reporter: Bert Roos


If the CLASSPATH environment variable contains spaces, the 
{{kafka-run-class.bat}} script fails to start.

The easy fix is to put quotes around it, as sketched below. See [PR 
#4469|https://github.com/apache/kafka/pull/4469] for the actual change.
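
A sketch of the kind of change, with a simplified stand-in for the real launch 
line (the actual line in {{kafka-run-class.bat}} has more options; see the PR 
for the exact diff):
{noformat}
rem Before: an unquoted classpath with spaces is split into several arguments
%JAVA% %KAFKA_OPTS% -cp %CLASSPATH% %*

rem After: quoting keeps a space-containing classpath as a single argument
%JAVA% %KAFKA_OPTS% -cp "%CLASSPATH%" %*
{noformat}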



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)