[jira] [Resolved] (KAFKA-4295) kafka-console-consumer.sh does not delete the temporary group in zookeeper

2018-06-15 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-4295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4295.
--
Resolution: Auto Closed

Closing inactive issue. The old consumer is no longer supported.

> kafka-console-consumer.sh does not delete the temporary group in zookeeper
> --
>
> Key: KAFKA-4295
> URL: https://issues.apache.org/jira/browse/KAFKA-4295
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 0.9.0.1, 0.10.0.0, 0.10.0.1
>Reporter: Sswater Shi
>Assignee: huxihx
>Priority: Minor
>
> I'm not sure whether this is a bug or intended behavior.
> Since 0.9.x.x, kafka-console-consumer.sh does not delete the group 
> information under zookeeper/consumers on exit when run without "--new-consumer". 
> A lot of abandoned zookeeper/consumers/console-consumer-xxx nodes accumulate if 
> kafka-console-consumer.sh is run many times.
> In 0.8.x.x, kafka-console-consumer.sh could be given a "group" argument. If it 
> was not specified, kafka-console-consumer.sh created a temporary group name 
> like 'console-consumer-'. If the group name was specified via "group", the 
> information under zookeeper/consumers was kept on exit. If the group name was 
> a temporary one, the information in ZooKeeper was deleted when 
> kafka-console-consumer.sh was quit with Ctrl+C. Why was this changed in 
> 0.9.x.x?
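For readers unfamiliar with the old behavior described above, here is a minimal sketch of the 0.8.x group-naming and cleanup decision. The `console-consumer-` naming convention is from the report; the function names and the random-suffix range are illustrative, not the actual console consumer code:

```python
import random

def make_group_name(group=None):
    # Mimic the old console consumer: use the supplied group name,
    # or generate a temporary one like console-consumer-12345.
    if group is not None:
        return group
    return "console-consumer-%d" % random.randint(1, 100000)

def should_delete_on_exit(group_name):
    # 0.8.x behavior per the report: only auto-generated temporary
    # groups were removed from zookeeper/consumers on Ctrl+C.
    return group_name.startswith("console-consumer-")
```

The reported 0.9.x regression is, in these terms, that the cleanup step guarded by `should_delete_on_exit` stopped running.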



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-5691) ZKUtils.CreateRecursive should handle NOAUTH

2018-06-15 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5691.
--
Resolution: Auto Closed

Scala consumers and related tools/tests were removed in KAFKA-2983, so this 
change may no longer be required. Please reopen if you think otherwise.

 

> ZKUtils.CreateRecursive should handle NOAUTH
> 
>
> Key: KAFKA-5691
> URL: https://issues.apache.org/jira/browse/KAFKA-5691
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ryan P
>Assignee: Ryan P
>Priority: Major
>
> Old consumers are unable to register themselves with secured ZooKeeper 
> installations because a NOAUTH code is returned when attempting to create 
> `/consumers`. Rather than failing, Kafka should log the error and continue.
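A sketch of the proposed log-and-continue behavior. The exception class and the injected `zk_create` callback are stand-ins for ZooKeeper's NOAUTH error and the ZkUtils client, not the real API:

```python
import logging

log = logging.getLogger("zk")

class NoAuthError(Exception):
    """Stand-in for ZooKeeper's NOAUTH error code."""

def create_recursive(zk_create, path):
    # Create each ancestor of `path` in turn, logging and continuing
    # past NOAUTH instead of failing, as the issue proposes.
    parts = path.strip("/").split("/")
    for i in range(1, len(parts) + 1):
        node = "/" + "/".join(parts[:i])
        try:
            zk_create(node)
        except NoAuthError:
            log.warning("No auth to create %s; continuing", node)
```

With this shape, a NOAUTH on `/consumers` (already created by an administrator with stricter ACLs) no longer aborts registration of the deeper nodes.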





[jira] [Resolved] (KAFKA-5590) Delete Kafka Topic Complete Failed After Enable Ranger Kafka Plugin

2018-06-15 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5590.
--
Resolution: Information Provided

Closing inactive issue. Please reopen if you think the issue still exists.
 

> Delete Kafka Topic Complete Failed After Enable Ranger Kafka Plugin
> ---
>
> Key: KAFKA-5590
> URL: https://issues.apache.org/jira/browse/KAFKA-5590
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.10.0.0
> Environment: kafka and ranger under ambari
>Reporter: Chaofeng Zhao
>Priority: Major
>
> Hi:
> Recently I have been developing some applications with Kafka under Ranger. 
> When I enable the Ranger Kafka plugin, I cannot delete a Kafka topic 
> completely, even though 'delete.topic.enable=true' is set. I find that when 
> the Ranger Kafka plugin is enabled, the operation must be authorized. How can 
> I delete a Kafka topic completely under Ranger? Thank you.





[jira] [Resolved] (KAFKA-5408) Using the ConsumerRecord instead of BaseConsumerRecord in the ConsoleConsumer

2018-06-15 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5408.
--
Resolution: Fixed

This was taken care of in KAFKA-2983.

> Using the ConsumerRecord instead of BaseConsumerRecord in the ConsoleConsumer
> -
>
> Key: KAFKA-5408
> URL: https://issues.apache.org/jira/browse/KAFKA-5408
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Paolo Patierno
>Priority: Minor
>
> Hi,
> because BaseConsumerRecord is marked as deprecated and will be removed in 
> future versions, it could be worth starting to remove its usage in the 
> ConsoleConsumer.
> If it makes sense to you, I'd like to work on this as a way to start 
> contributing to the project.
> Thanks,
> Paolo.





[jira] [Resolved] (KAFKA-4723) offsets.storage=kafka - groups stuck in rebalancing with committed offsets

2018-06-15 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-4723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4723.
--
Resolution: Auto Closed

Closing inactive issue. The old consumer is no longer supported. Please upgrade 
to the Java consumer whenever possible.

> offsets.storage=kafka - groups stuck in rebalancing with committed offsets
> --
>
> Key: KAFKA-4723
> URL: https://issues.apache.org/jira/browse/KAFKA-4723
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.1
>Reporter: biker73
>Priority: Minor
>
> Hi, I have moved the offset store to Kafka only. When I now run:
>  bin/kafka-consumer-groups.sh  --bootstrap-server localhost:9094 --describe  
> --new-consumer --group my_consumer_group
> I get the message:
> Consumer group `my_consumer_group` does not exist or is rebalancing.
> I have found the issue KAFKA-3144, but it refers to consumer groups that have 
> no committed offsets; the groups I am looking at do have committed offsets 
> and are constantly in use.
> Using --list I get all my consumer groups returned. Although some are 
> inactive, I have around 6 very active ones (millions of messages a day, 
> constantly). Looking at the MBean data, Kafka Tool, etc., I can see the lags 
> and offsets changing every second. Therefore I would expect the 
> kafka-consumer-groups.sh script to return the lags and offsets for all 6 
> active consumer groups.
> I think what has happened is that when I moved offset storage from ZooKeeper 
> to Kafka (and then disabled sending to both), something became confused. 
> Querying ZooKeeper I get the offsets for the allegedly missing consumer 
> groups, but they should be stored and committed to Kafka.
> Many thanks.





[jira] [Commented] (KAFKA-6314) Add a tool to delete kafka based consumer offsets for a given group

2018-06-19 Thread Manikumar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517410#comment-16517410
 ] 

Manikumar commented on KAFKA-6314:
--

"--delete" works with Java consumer groups as well. This option just deletes 
all the group information and associated offsets.

> Add a tool to delete kafka based consumer offsets for a given group
> ---
>
> Key: KAFKA-6314
> URL: https://issues.apache.org/jira/browse/KAFKA-6314
> Project: Kafka
>  Issue Type: New Feature
>  Components: consumer, core, tools
>Reporter: Tom Scott
>Priority: Minor
>
> Add a tool to delete kafka based consumer offsets for a given group similar 
> to the reset tool. It could look something like this:
> kafka-consumer-groups --bootstrap-server localhost:9092 --delete-offsets 
> --group somegroup
> The case for this is as follows:
> 1. Consumer group with id: group1 subscribes to topic1
> 2. The group is stopped 
> 3. The subscription changed to topic2 but the id is kept as group1
> Now the output of kafka-consumer-groups --describe for the group will 
> show topic1 even though the group is not subscribed to that topic. This is 
> bad for monitoring, as it will show lag on topic1.
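The effect the requested tool would have can be illustrated with a small simulation. The dictionary below is hypothetical and merely stands in for the per-group committed offsets that `__consumer_offsets` holds:

```python
# Committed offsets keyed by (group, topic); values map partition -> offset.
offsets = {
    ("group1", "topic1"): {0: 42, 1: 17},
    ("group1", "topic2"): {0: 5},
}

def delete_offsets(store, group, topic):
    # Drop the committed offsets for one (group, topic) pair, so a
    # later 'describe' no longer reports lag on the stale topic.
    return {k: v for k, v in store.items() if k != (group, topic)}

# After the subscription moved from topic1 to topic2, removing the
# stale topic1 entry stops the misleading lag report.
remaining = delete_offsets(offsets, "group1", "topic1")
```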





[jira] [Commented] (KAFKA-6344) 0.8.2 clients will store invalid configuration in ZK for Kafka 1.0 brokers

2018-06-19 Thread Manikumar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517430#comment-16517430
 ] 

Manikumar commented on KAFKA-6344:
--

This accidentally got fixed in 1.1.0.

> 0.8.2 clients will store invalid configuration in ZK for Kafka 1.0 brokers
> --
>
> Key: KAFKA-6344
> URL: https://issues.apache.org/jira/browse/KAFKA-6344
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Vincent Bernat
>Priority: Major
>
> Hello,
> When using a Kafka 0.8.2 Scala client, the "changeTopicConfig" method from 
> AdminUtils writes the topic name to /config/changes/config_change_X. 
> Since 0.9, a JSON string is expected there, and brokers bail out when that is 
> not the case, with a java.lang.IllegalArgumentException whose message is 
> "Config change notification has an unexpected value. The format 
> is:{\"version\" : 1, \"entity_type\":\"topics/clients\", \"entity_name\" : 
> \"topic_name/client_id\"} or {\"version\" : 2, 
> \"entity_path\":\"entity_type/entity_name\"}. Received: \"dns\"". Moreover, 
> the broker will shut down after this error.
> As 1.0 brokers are expected to accept 0.8.x clients, either highlight in the 
> documentation that this doesn't apply to AdminUtils, or accept this "version 
> 0" format.
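For illustration, the two notification shapes quoted in the error message can be distinguished with a small parser. This is a sketch of the expected version-1/version-2 JSON formats, not the broker's actual validation code:

```python
import json

def parse_config_change(raw):
    # Accept the version-1 or version-2 JSON notification formats from
    # the broker's error message; reject anything else, such as the
    # bare topic name that an 0.8.2 client writes (e.g. "dns").
    try:
        data = json.loads(raw)
    except ValueError:
        raise ValueError("not JSON: %r" % raw)
    if not isinstance(data, dict):
        raise ValueError("unexpected value: %r" % raw)
    if data.get("version") == 1:
        return (data["entity_type"], data["entity_name"])
    if data.get("version") == 2:
        return tuple(data["entity_path"].split("/", 1))
    raise ValueError("unknown version: %r" % raw)
```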





[jira] [Resolved] (KAFKA-6485) 'ConsumerGroupCommand' performance optimization for old consumer describe group

2018-06-19 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-6485.
--
Resolution: Auto Closed

The Scala consumers are deprecated. The old clients and related tools were 
removed in KAFKA-2983.

> 'ConsumerGroupCommand' performance optimization for old consumer describe 
> group
> ---
>
> Key: KAFKA-6485
> URL: https://issues.apache.org/jira/browse/KAFKA-6485
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin
>Affects Versions: 1.0.0
>Reporter: HongLiang
>Priority: Major
> Attachments: 1ada9efa5-850a-4495-83b6-9a28f8885512.png, 
> 75a51a9f-d2c3-47fa-81d3-519343aa4d84.png, ConsumerGroupCommand.diff
>
>
> ConsumerGroupCommand describe-group performance optimization:
> roughly a 3x performance improvement compared to trunk (1.0+), and roughly a 
> 10x improvement compared to 0.10.2.1.
> `./kafka-consumer-groups.sh --zookeeper 127.0.0.1 --group 
> cp-mirror-consumer-group --describe
> `





[jira] [Resolved] (KAFKA-2550) [Kafka][0.8.2.1][Performance]When there are a lot of partition under a Topic, there are serious performance degradation.

2018-06-12 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-2550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2550.
--
Resolution: Auto Closed

Closing inactive issue. Old clients are deprecated. Please reopen if you think 
the issue still exists in newer versions.
 

> [Kafka][0.8.2.1][Performance]When there are a lot of partition under a Topic, 
> there are serious performance degradation.
> 
>
> Key: KAFKA-2550
> URL: https://issues.apache.org/jira/browse/KAFKA-2550
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer, producer 
>Affects Versions: 0.8.2.1
>Reporter: yanwei
>Assignee: Neha Narkhede
>Priority: Major
>
> Because our business requires creating a large number of partitions, I tested 
> how many partitions can be supported.
> I found that when there are a lot of partitions under a topic, there is 
> serious performance degradation.
> Analysis shows that, besides the hard disk being a bottleneck, the client is 
> also a bottleneck.
> I used JProfiler; the producer and consumer handled 100 messages (message 
> size: 500 bytes).
> 1. Consumer high-level API (I find I can't upload a picture?):
>  ZookeeperConsumerConnector.scala-->rebalance
> -->val assignmentContext = new AssignmentContext(group, consumerIdString, 
> config.excludeInternalTopics, zkClient)
> -->ZkUtils.getPartitionsForTopics(zkClient, myTopicThreadIds.keySet.toSeq)
> -->getPartitionAssignmentForTopics
> -->Json.parseFull(jsonPartitionMap) 
>  1) one topic, 400 partitions:
>  JProfiler: 48.6% CPU run time
>  2) one topic, 3000 partitions:
>  JProfiler: 97.8% CPU run time
>   Maybe jsonPartitionMap is very big, which makes parsing very slow.
>   But this function is executed only once, so the problem should not be too 
> big.
> 2. Producer Scala API:
> BrokerPartitionInfo.scala--->getBrokerPartitionInfo:
> partitionMetadata.map { m =>
>   m.leader match {
> case Some(leader) =>
>   //y00163442 delete log print
>   debug("Partition [%s,%d] has leader %d".format(topic, 
> m.partitionId, leader.id))
>   new PartitionAndLeader(topic, m.partitionId, Some(leader.id))
> case None =>
>   //y00163442 delete log print
>   //debug("Partition [%s,%d] does not have a leader 
> yet".format(topic, m.partitionId))
>   new PartitionAndLeader(topic, m.partitionId, None)
>   }
> }.sortWith((s, t) => s.partitionId < t.partitionId) 
>  
>   When the partition count is >25, the 'format' function accounts for 44.8% 
> of the CPU run time. Nearly half of the time is spent in the format function. 
> Whether or not debug log printing is enabled, this format call is executed, 
> which led to a fivefold decrease in TPS (25000 -> 5000).
>   
> 3. Producer Java client (clients module):
>   function: org.apache.kafka.clients.producer.KafkaProducer.send
>   I find that the CPU run time of the 'send' function rises with the number 
> of partitions; when there are 5000 partitions, the CPU run time is 60.8%.
>   Because the Kafka broker side's CPU, memory, disk, and network did not 
> reach a bottleneck, and the results are similar whether request.required.acks 
> is set to 0 or 1, I suspect 'send' may have some bottlenecks.
> Very unfortunately the picture upload did not succeed, so you can't see the 
> results.
> In my test results, for a single server, a single hard disk can support 1000 
> partitions, and 7 hard disks can support 3000 partitions. If the client 
> bottleneck can be solved, then I estimate seven hard disks could support 
> more partitions.
> In an actual production configuration, the partitions could be spread across 
> more than one topic, so things could be better.
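The cost described in point 2 is the classic eager-string-formatting problem: the debug message is built even when debug logging is off. A hedged sketch of the mitigation, shown in Python rather than the report's Scala, where the logger interpolates the arguments only if the level is enabled:

```python
import logging

log = logging.getLogger("producer")

def on_partition(topic, partition_id, leader_id):
    # Eager version (the cost the reporter measured): the string is
    # built on every call, even when DEBUG is off.
    #   log.debug("Partition [%s,%d] has leader %d"
    #             % (topic, partition_id, leader_id))
    #
    # Lazy version: arguments are passed separately, and formatting
    # happens only if a handler at DEBUG level will actually emit it.
    log.debug("Partition [%s,%d] has leader %d",
              topic, partition_id, leader_id)
```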





[jira] [Resolved] (KAFKA-2445) Failed test: kafka.producer.ProducerTest > testSendWithDeadBroker FAILED

2018-06-07 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-2445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2445.
--
Resolution: Auto Closed

> Failed test: kafka.producer.ProducerTest > testSendWithDeadBroker FAILED
> 
>
> Key: KAFKA-2445
> URL: https://issues.apache.org/jira/browse/KAFKA-2445
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Priority: Major
>
> This test failed on Jenkins build: 
> https://builds.apache.org/job/Kafka-trunk/590/console
> kafka.producer.ProducerTest > testSendWithDeadBroker FAILED
> java.lang.AssertionError: Message set should have 1 message
> at org.junit.Assert.fail(Assert.java:92)
> at org.junit.Assert.assertTrue(Assert.java:44)
> at 
> kafka.producer.ProducerTest.testSendWithDeadBroker(ProducerTest.scala:260)





[jira] [Assigned] (KAFKA-6973) setting invalid timestamp causes Kafka broker restart to fail

2018-05-31 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-6973:


Assignee: Manikumar

> setting invalid timestamp causes Kafka broker restart to fail
> -
>
> Key: KAFKA-6973
> URL: https://issues.apache.org/jira/browse/KAFKA-6973
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 1.1.0
>Reporter: Paul Brebner
>Assignee: Manikumar
>Priority: Critical
>
> Setting the timestamp type to an invalid value causes the Kafka broker to 
> fail on startup. E.g.
> ./kafka-topics.sh --create --zookeeper localhost --topic duck3 --partitions 1 
> --replication-factor 1 --config message.timestamp.type=boom
>  
> Also note that the docs say the parameter name is 
> log.message.timestamp.type, but that name is silently ignored here.
> The create command works with no error for the invalid timestamp value, but 
> the next time you restart Kafka:
>  
> [2018-05-29 13:09:05,806] FATAL [KafkaServer id=0] Fatal error during 
> KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
> java.util.NoSuchElementException: Invalid timestamp type boom
> at org.apache.kafka.common.record.TimestampType.forName(TimestampType.java:39)
> at kafka.log.LogConfig.<init>(LogConfig.scala:94)
> at kafka.log.LogConfig$.fromProps(LogConfig.scala:279)
> at kafka.log.LogManager$$anonfun$17.apply(LogManager.scala:786)
> at kafka.log.LogManager$$anonfun$17.apply(LogManager.scala:785)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:221)
> at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428)
> at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:428)
> at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
> at scala.collection.AbstractTraversable.map(Traversable.scala:104)
> at kafka.log.LogManager$.apply(LogManager.scala:785)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:222)
> at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
> at kafka.Kafka$.main(Kafka.scala:92)
> at kafka.Kafka.main(Kafka.scala)
> [2018-05-29 13:09:05,811] INFO [KafkaServer id=0] shutting down 
>  
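The failure mode in the stack trace is TimestampType.forName throwing for an unknown name only at broker restart. A minimal sketch of the validation the issue implies should run at topic-creation time instead; the two names here are the valid values of message.timestamp.type, but the function itself is illustrative, not Kafka code:

```python
VALID_TIMESTAMP_TYPES = {"CreateTime", "LogAppendTime"}

def validate_timestamp_type(name):
    # Mirror what TimestampType.forName does (raise for unknown names),
    # but do it when the config is set rather than at broker startup.
    if name not in VALID_TIMESTAMP_TYPES:
        raise ValueError("Invalid timestamp type %s" % name)
    return name
```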





[jira] [Assigned] (KAFKA-5905) Remove PrincipalBuilder and DefaultPrincipalBuilder

2018-05-27 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-5905:


Assignee: Manikumar

> Remove PrincipalBuilder and DefaultPrincipalBuilder
> ---
>
> Key: KAFKA-5905
> URL: https://issues.apache.org/jira/browse/KAFKA-5905
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Manikumar
>Priority: Major
> Fix For: 2.0.0
>
>
> These classes were deprecated after KIP-189: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-189%3A+Improve+principal+builder+interface+and+add+support+for+SASL,
>  which is part of 1.0.0.





[jira] [Updated] (KAFKA-6835) Enable topic unclean leader election to be enabled without controller change

2018-05-27 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-6835:
-
Fix Version/s: 2.0.0

> Enable topic unclean leader election to be enabled without controller change
> 
>
> Key: KAFKA-6835
> URL: https://issues.apache.org/jira/browse/KAFKA-6835
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Rajini Sivaram
>Assignee: Manikumar
>Priority: Major
> Fix For: 2.0.0
>
>
> Dynamic update of broker's default unclean.leader.election.enable will be 
> processed without controller change (KAFKA-6526). We should probably do the 
> same for topic overrides as well.





[jira] [Updated] (KAFKA-6751) Make max.connections.per.ip.overrides a dynamic config

2018-05-27 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-6751:
-
Fix Version/s: 2.1.0

> Make max.connections.per.ip.overrides a dynamic config
> --
>
> Key: KAFKA-6751
> URL: https://issues.apache.org/jira/browse/KAFKA-6751
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Manikumar
>Priority: Major
>  Labels: needs-kip
> Fix For: 2.1.0
>
>
> It might be useful to be able to update this config dynamically since we 
> occasionally run into situations where a particular host (or set of hosts) is 
> causing some trouble for the broker.





[jira] [Commented] (KAFKA-6961) UnknownTopicOrPartitionException & NotLeaderForPartitionException upon replication of topics.

2018-05-29 Thread Manikumar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16493167#comment-16493167
 ] 

Manikumar commented on KAFKA-6961:
--

Are you observing these error logs continuously? If they are only seen during 
topic creation, then you can ignore these transient errors.
If they are frequent, you can investigate for any frequent leader changes, ISR 
shrink/expand, etc.

> UnknownTopicOrPartitionException & NotLeaderForPartitionException upon 
> replication of topics.
> -
>
> Key: KAFKA-6961
> URL: https://issues.apache.org/jira/browse/KAFKA-6961
> Project: Kafka
>  Issue Type: Bug
> Environment: kubernetes cluster kafka.
>Reporter: kaushik srinivas
>Priority: Major
> Attachments: k8s_NotLeaderForPartition.txt, k8s_replication_errors.txt
>
>
> Running Kafka & ZooKeeper in a Kubernetes cluster.
> No. of brokers: 3
> No. of partitions per topic: 3
> We created a topic with 3 partitions, and it looks like all the partitions 
> are up. Below is a snapshot confirming this:
> Topic:applestore  PartitionCount:3  ReplicationFactor:3   Configs:
>  Topic: applestore  Partition: 0Leader: 1001Replicas: 
> 1001,1003,1002Isr: 1001,1003,1002
>  Topic: applestore  Partition: 1Leader: 1002Replicas: 
> 1002,1001,1003Isr: 1002,1001,1003
>  Topic: applestore  Partition: 2Leader: 1003Replicas: 
> 1003,1002,1001Isr: 1003,1002,1001
>  
> But as soon as the topics are created, the stack traces below appear in the 
> brokers:
>  
> error 1: 
> [2018-05-28 08:00:31,875] ERROR [ReplicaFetcher replicaId=1001, 
> leaderId=1003, fetcherId=7] Error for partition applestore-2 to broker 
> 1003:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
>  
> error 2 :
> [2018-05-28 00:43:20,993] ERROR [ReplicaFetcher replicaId=1003, 
> leaderId=1001, fetcherId=0] Error for partition apple-6 to broker 
> 1001:org.apache.kafka.common.errors.NotLeaderForPartitionException: This 
> server is not the leader for that topic-partition. 
> (kafka.server.ReplicaFetcherThread)
>  
> When we try producing records to each specific partition, it works fine, and 
> the log size across the replicated brokers appears to be equal, which means 
> replication is happening fine.
> Attaching the two stack trace files.
>  
> Why are these stack traces appearing? Can we ignore them if they are just 
> noise?
>  
>  





[jira] [Resolved] (KAFKA-2556) Corrupted index on log.dir updates

2018-05-29 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-2556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2556.
--
Resolution: Not A Problem

Closing inactive issue. The Kafka log.dir contains topic data and metadata 
information; we cannot arbitrarily replace the log.dir contents. Please reopen 
if you think the issue still exists.

> Corrupted index on log.dir updates
> --
>
> Key: KAFKA-2556
> URL: https://issues.apache.org/jira/browse/KAFKA-2556
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.1
>Reporter: Denis
>Priority: Major
>
> A partition is corrupted when the user updates the server configuration.
> A topic will be corrupted if two or more `log.dir` directories contain 
> segments for the same topic after the configuration changes.
> Steps to reproduce:
> * Start Kafka service with several `log.dir` directories
> * Stop Kafka service
> * Update configuration. Remove one of `log.dir` directories
> * Start Kafka service. Kafka creates the partition again under another 
> directory
> * Stop Kafka service
> * Update configuration. Restore the folder we removed on the third step
> * Start Kafka process
> Result:
> {code}
> [2015-09-18 10:28:48,764] INFO Verifying properties 
> (kafka.utils.VerifiableProperties)
> [2015-09-18 10:28:48,801] INFO Property auto.create.topics.enable is 
> overridden to false (kafka.utils.VerifiableProperties)
> [2015-09-18 10:28:48,801] INFO Property auto.leader.rebalance.enable is 
> overridden to false (kafka.utils.VerifiableProperties)
> [2015-09-18 10:28:48,801] INFO Property broker.id is overridden to 6 
> (kafka.utils.VerifiableProperties)
> [2015-09-18 10:28:48,802] INFO Property default.replication.factor is 
> overridden to 1 (kafka.utils.VerifiableProperties)
> [2015-09-18 10:28:48,802] INFO Property host.name is overridden to 
> inferno07.chi.net (kafka.utils.VerifiableProperties)
> [2015-09-18 10:28:48,803] WARN Property kafka.metrics.graphite.group is not 
> valid (kafka.utils.VerifiableProperties)
> [2015-09-18 10:28:48,803] WARN Property kafka.metrics.graphite.host is not 
> valid (kafka.utils.VerifiableProperties)
> [2015-09-18 10:28:48,803] WARN Property kafka.metrics.graphite.port is not 
> valid (kafka.utils.VerifiableProperties)
> [2015-09-18 10:28:48,803] WARN Property kafka.metrics.graphite.regex is not 
> valid (kafka.utils.VerifiableProperties)
> [2015-09-18 10:28:48,803] WARN Property kafka.metrics.polling.interval.secs 
> is not valid (kafka.utils.VerifiableProperties)
> [2015-09-18 10:28:48,804] WARN Property kafka.metrics.reporters is not valid 
> (kafka.utils.VerifiableProperties)
> [2015-09-18 10:28:48,804] WARN Property log.cleanup.interval.mins is not 
> valid (kafka.utils.VerifiableProperties)
> [2015-09-18 10:28:48,804] INFO Property log.dirs is overridden to 
> /mnt/21e57274-0ba1-41e0-9ea4-befef32ec62c/data/kafka,/mnt/f81fc52a-7784-47c3-8b6d-be95cc935697/data/kafka,/mnt/75928c58-289d-4954-8640-0b82a990b3bc/data/kafka,/mnt/6bcb0c8e-da33-43e6-b878-703600f622d1/data/kafka,/mnt/afd7361c-dc16-4b78-8608-adda119559d0/data/kafka,/mnt/96a93811-6d91-41aa-ab1a-3804f43a5b05/data/kafka,/mnt/c085ee04-65ff-493d-b236-fa3dd462595c/data/kafka,/mnt/fbec6169-ca70-4f07-a7e9-af85424a97c5/data/kafka,/mnt/363d4473-531f-446c-b483-49593587f965/data/kafka,/mnt/d47fd346-1b77-41a8-a0a8-1c517c394868/data/kafka,/mnt/2c97da3c-4d5d-49bf-ae5a-a46c2839b18a/data/kafka,/mnt/fc6fd370-a213-4b02-9a7c-07f7a5ad505f/data/kafka
>  (kafka.utils.VerifiableProperties)
> [2015-09-18 10:28:48,804] INFO Property log.retention.hours is overridden to 
> 168 (kafka.utils.VerifiableProperties)
> [2015-09-18 10:28:48,804] INFO Property log.segment.bytes is overridden to 
> 536870912 (kafka.utils.VerifiableProperties)
> [2015-09-18 10:28:48,805] INFO Property message.max.bytes is overridden to 
> 41943040 (kafka.utils.VerifiableProperties)
> [2015-09-18 10:28:48,805] INFO Property num.io.threads is overridden to 45 
> (kafka.utils.VerifiableProperties)
> [2015-09-18 10:28:48,805] INFO Property num.network.threads is overridden to 
> 150 (kafka.utils.VerifiableProperties)
> [2015-09-18 10:28:48,805] INFO Property num.partitions is overridden to 1 
> (kafka.utils.VerifiableProperties)
> [2015-09-18 10:28:48,806] INFO Property num.replica.fetchers is overridden to 
> 10 (kafka.utils.VerifiableProperties)
> [2015-09-18 10:28:48,806] INFO Property port is overridden to 9093 
> (kafka.utils.VerifiableProperties)
> [2015-09-18 10:28:48,806] INFO Property replica.fetch.max.bytes is overridden 
> to 41943040 (kafka.utils.VerifiableProperties)
> [2015-09-18 10:28:48,806] INFO Property socket.receive.buffer.bytes is 
> overridden to 1048576 (kafka.utils.VerifiableProperties)
> [2015-09-18 10:28:48,806] INFO Property socket.request.max.bytes is 
> overridden to 104857600 

[jira] [Resolved] (KAFKA-3293) Consumers are not able to get messages.

2018-05-29 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3293.
--
Resolution: Cannot Reproduce

 Closing inactive issue. Please reopen if the issue still exists in newer 
versions.

> Consumers are not able to get messages.
> ---
>
> Key: KAFKA-3293
> URL: https://issues.apache.org/jira/browse/KAFKA-3293
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, offset manager
>Affects Versions: 0.9.0.1
> Environment: kafka: kafka_2.11-0.9.0.1
> java: jdk1.8.0_65
> OS: Linux stephen-T450s 3.19.0-51-generic #57~14.04.1-Ubuntu SMP Fri Feb 19 
> 14:36:55 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Stephen Wong
>Assignee: Neha Narkhede
>Priority: Major
>
> Overview
> ===
> The results of the test are not consistent.
> The problem is that something is preventing the consumer from receiving the 
> messages.
> Configuration
> ==
> Server (only num.partitions is changed)
> diff config/server.properties config.backup/server.properties
> 65c65
> < num.partitions=8
> ---
> > num.partitions=1
> Producer
> properties.put("bootstrap.servers", "localhost:9092");
> properties.put("acks", "all");
> properties.put("key.serializer", 
> "org.apache.kafka.common.serialization.LongSerializer");
> properties.put("value.serializer", 
> "org.apache.kafka.common.serialization.StringSerializer");
> properties.put("partitioner.class", 
> "kafkatest.sample2.SimplePartitioner");
> Consumer
> properties.put("bootstrap.servers", "localhost:9092");
> properties.put("group.id", "testGroup");
> properties.put("key.deserializer", 
> "org.apache.kafka.common.serialization.LongDeserializer");
> properties.put("value.deserializer", 
> "org.apache.kafka.common.serialization.StringDeserializer");
> properties.put("enable.auto.commit", "false");
> Steps to reproduce:
> ===
> 1. started the zookeeper
> 2. started the kafka server
> 3. created topic
> $ bin/kafka-topics.sh --zookeeper localhost:2181 --create 
> --replication-factor 1 --partition 8 --topic testTopic4
> 4. Ran SimpleProducerDriver with 5 producers, and the amount of messages 
> produced is 50
> 5. Offset Status
> $ bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list 
> localhost:9092 --topic testTopic4 --time -1
> testTopic4:2:1
> testTopic4:5:27
> testTopic4:4:1
> testTopic4:7:2
> testTopic4:1:8
> testTopic4:3:0
> testTopic4:6:11
> testTopic4:0:0
> 6. waited until the producer driver completed; it takes no more than a few 
> seconds
> 7. ran the SimpleConsumerDriver a couple of times; no message was 
> received. The following DEBUG information was found:
> 2016-02-25 22:42:19 DEBUG [pool-1-thread-2] Fetcher: - Ignoring fetched 
> records for partition testTopic4-3 since it is no longer fetchable
> 8. altered the properties of consumer, had the auto commit disabled:
> //properties.put("enable.auto.commit", "false");
> 9. ran the SimpleConsumerDriver a couple of times, still, no message is 
> received.
> Following DEBUG information is found:
> 2016-02-25 22:47:23 DEBUG [pool-1-thread-2] ConsumerCoordinator: - Committed 
> offset 8 for partition testTopic4-1
> seems like the offset was updated?
> 10. re-enabled the auto commit, nothing changed.
> Following DEBUG information is found:
> 2016-02-25 22:49:38 DEBUG [pool-1-thread-7] Fetcher: - Resetting offset for 
> partition testTopic4-6 to the committed offset 11
> 11. ran the SimpleProducerDriver again, another 50 messages are published
> 12. ran the SimpleConsumerDriver again, 100 messages were consumed.
> 13. ran the SimpleConsumerDriver again, 50 messages were consumed.
> As auto commit is disabled, all messages (100) should be consumed.
> The results of the test are not consistent.
> The problem is that something is preventing the consumer from receiving the 
> messages.
> Sometimes the only workaround was to run the producer while the consumers 
> were still active.
> Once the consumers started to consume messages, the problem did not occur 
> again.
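
For reference, the semantics the reporter expects can be modelled outside Kafka (an illustrative Python sketch, not client code; the topic and record names are made up): with auto commit disabled and no explicit commit, every run should resume from the last committed offset and see the same records again.

```python
def fetch(log, committed, partition):
    """Return every record at or after the committed offset for a partition."""
    start = committed.get(partition, 0)
    return log[partition][start:]

log = {"testTopic4-1": ["m0", "m1", "m2", "m3"]}  # a toy partition log
committed = {}                                    # nothing committed yet

first = fetch(log, committed, "testTopic4-1")   # all four records
second = fetch(log, committed, "testTopic4-1")  # the same four records again:
# nothing was committed between the two runs

committed["testTopic4-1"] = 2                   # an explicit commit at offset 2
third = fetch(log, committed, "testTopic4-1")   # only ["m2", "m3"]
```

Under this model, step 12's result (100 messages) is the expected one, and step 13's result (50 messages) implies an offset commit happened somewhere despite the configuration.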



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-6883) KafkaShortnamer should allow to convert Kerberos principal name to upper case user name

2018-06-04 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-6883:


Assignee: Manikumar

> KafkaShortnamer should allow to convert Kerberos principal name to upper case 
> user name
> ---
>
> Key: KAFKA-6883
> URL: https://issues.apache.org/jira/browse/KAFKA-6883
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Attila Sasvari
>Assignee: Manikumar
>Priority: Major
>
> KAFKA-5764 implemented support to convert Kerberos principal name to lower 
> case Linux user name via auth_to_local rules. 
> As a follow-up, KafkaShortnamer could be further extended to allow converting 
> principal names to uppercase by appending /U to the rule.
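
A rough model of the requested behaviour (an illustrative Python sketch of the case-modifier suffix only; Kafka's actual KerberosShortNamer parses full auth_to_local rules): `/L`, added in KAFKA-5764, lowercases the resolved short name, and the proposed `/U` would uppercase it.

```python
def apply_case(short_name, modifier=None):
    """Apply an auth_to_local-style case modifier to a resolved short name."""
    if modifier == "L":       # existing behaviour from KAFKA-5764
        return short_name.lower()
    if modifier == "U":       # behaviour proposed in this ticket
        return short_name.upper()
    return short_name

# e.g. the primary extracted from "Alice/admin@EXAMPLE.COM" after rule matching:
apply_case("Alice", "L")  # "alice"
apply_case("Alice", "U")  # "ALICE"
```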



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6959) Any impact we foresee if we upgrade Linux version or move to VM instead of physical Linux server

2018-06-01 Thread Manikumar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16497840#comment-16497840
 ] 

Manikumar commented on KAFKA-6959:
--

Please post these kinds of queries to the us...@kafka.apache.org mailing list 
(http://kafka.apache.org/contact) for better visibility and quicker responses.

> Any impact we foresee if we upgrade Linux version or move to VM instead of 
> physical Linux server
> 
>
> Key: KAFKA-6959
> URL: https://issues.apache.org/jira/browse/KAFKA-6959
> Project: Kafka
>  Issue Type: Task
>  Components: admin
>Affects Versions: 0.11.0.2
> Environment: Prod
>Reporter: Gene Yi
>Priority: Trivial
>  Labels: patch, performance, security
>
> As we know, there is the recent issue of the Linux Meltdown and Spectre 
> vulnerabilities: all Linux servers need to deploy the patch, and the OS 
> version needs to be at least 6.9. We want to know the impact on Kafka: is 
> there any side effect if we directly upgrade the OS to 7.0? Also, is there 
> any limitation if we deploy Kafka to a VM instead of a physical server?
> Currently the Kafka version we use is 0.11.0.2.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-3743) kafka-server-start.sh: Unhelpful error message

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3743.
--
Resolution: Duplicate

> kafka-server-start.sh: Unhelpful error message
> --
>
> Key: KAFKA-3743
> URL: https://issues.apache.org/jira/browse/KAFKA-3743
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.10.0.0
>Reporter: Magnus Edenhill
>Priority: Minor
>
> When trying to start Kafka from an uncompiled source tarball rather than the 
> binary distribution, the kafka-server-start.sh command gives a cryptic error 
> message:
> ```
> $ bin/kafka-server-start.sh config/server.properties 
> Error: Could not find or load main class config.server.properties
> ```
> This could probably be improved to say something closer to the truth.
> This is on 0.10.0.0-rc6 tarball from github.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6335) SimpleAclAuthorizerTest#testHighConcurrencyModificationOfResourceAcls fails intermittently

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-6335:
-
Fix Version/s: (was: 2.0.0)

> SimpleAclAuthorizerTest#testHighConcurrencyModificationOfResourceAcls fails 
> intermittently
> --
>
> Key: KAFKA-6335
> URL: https://issues.apache.org/jira/browse/KAFKA-6335
> Project: Kafka
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Manikumar
>Priority: Major
>
> From 
> https://builds.apache.org/job/kafka-pr-jdk9-scala2.12/3045/testReport/junit/kafka.security.auth/SimpleAclAuthorizerTest/testHighConcurrencyModificationOfResourceAcls/
>  :
> {code}
> java.lang.AssertionError: expected acls Set(User:36 has Allow permission for 
> operations: Read from hosts: *, User:7 has Allow permission for operations: 
> Read from hosts: *, User:21 has Allow permission for operations: Read from 
> hosts: *, User:39 has Allow permission for operations: Read from hosts: *, 
> User:43 has Allow permission for operations: Read from hosts: *, User:3 has 
> Allow permission for operations: Read from hosts: *, User:35 has Allow 
> permission for operations: Read from hosts: *, User:15 has Allow permission 
> for operations: Read from hosts: *, User:16 has Allow permission for 
> operations: Read from hosts: *, User:22 has Allow permission for operations: 
> Read from hosts: *, User:26 has Allow permission for operations: Read from 
> hosts: *, User:11 has Allow permission for operations: Read from hosts: *, 
> User:38 has Allow permission for operations: Read from hosts: *, User:8 has 
> Allow permission for operations: Read from hosts: *, User:28 has Allow 
> permission for operations: Read from hosts: *, User:32 has Allow permission 
> for operations: Read from hosts: *, User:25 has Allow permission for 
> operations: Read from hosts: *, User:41 has Allow permission for operations: 
> Read from hosts: *, User:44 has Allow permission for operations: Read from 
> hosts: *, User:48 has Allow permission for operations: Read from hosts: *, 
> User:2 has Allow permission for operations: Read from hosts: *, User:9 has 
> Allow permission for operations: Read from hosts: *, User:14 has Allow 
> permission for operations: Read from hosts: *, User:46 has Allow permission 
> for operations: Read from hosts: *, User:13 has Allow permission for 
> operations: Read from hosts: *, User:5 has Allow permission for operations: 
> Read from hosts: *, User:29 has Allow permission for operations: Read from 
> hosts: *, User:45 has Allow permission for operations: Read from hosts: *, 
> User:6 has Allow permission for operations: Read from hosts: *, User:37 has 
> Allow permission for operations: Read from hosts: *, User:23 has Allow 
> permission for operations: Read from hosts: *, User:19 has Allow permission 
> for operations: Read from hosts: *, User:24 has Allow permission for 
> operations: Read from hosts: *, User:17 has Allow permission for operations: 
> Read from hosts: *, User:34 has Allow permission for operations: Read from 
> hosts: *, User:12 has Allow permission for operations: Read from hosts: *, 
> User:42 has Allow permission for operations: Read from hosts: *, User:4 has 
> Allow permission for operations: Read from hosts: *, User:47 has Allow 
> permission for operations: Read from hosts: *, User:18 has Allow permission 
> for operations: Read from hosts: *, User:31 has Allow permission for 
> operations: Read from hosts: *, User:49 has Allow permission for operations: 
> Read from hosts: *, User:33 has Allow permission for operations: Read from 
> hosts: *, User:1 has Allow permission for operations: Read from hosts: *, 
> User:27 has Allow permission for operations: Read from hosts: *) but got 
> Set(User:36 has Allow permission for operations: Read from hosts: *, User:7 
> has Allow permission for operations: Read from hosts: *, User:21 has Allow 
> permission for operations: Read from hosts: *, User:39 has Allow permission 
> for operations: Read from hosts: *, User:43 has Allow permission for 
> operations: Read from hosts: *, User:3 has Allow permission for operations: 
> Read from hosts: *, User:35 has Allow permission for operations: Read from 
> hosts: *, User:15 has Allow permission for operations: Read from hosts: *, 
> User:16 has Allow permission for operations: Read from hosts: *, User:22 has 
> Allow permission for operations: Read from hosts: *, User:26 has Allow 
> permission for operations: Read from hosts: *, User:11 has Allow permission 
> for operations: Read from hosts: *, User:38 has Allow permission for 
> operations: Read from hosts: *, User:8 has Allow permission for operations: 
> Read from hosts: *, User:28 has Allow permission for operations: Read from 
> hosts: *, User:32 has Allow permission 

[jira] [Commented] (KAFKA-6984) Question

2018-06-02 Thread Manikumar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16498933#comment-16498933
 ] 

Manikumar commented on KAFKA-6984:
--

You need to install Gradle and bootstrap the Gradle wrapper.

check here: [https://github.com/apache/kafka/blob/trunk/README.md]

or you can download a binary distribution: http://kafka.apache.org/downloads

> Question
> 
>
> Key: KAFKA-6984
> URL: https://issues.apache.org/jira/browse/KAFKA-6984
> Project: Kafka
>  Issue Type: Bug
>Reporter: Remil
>Priority: Major
>
> hadoopuser@sherin-VirtualBox:~$ sudo su -p - zookeeper -c 
> "/usr/local/zookeeper/zookeeper-3.4.12/bin/zkServer.sh start" ZooKeeper JMX 
> enabled by default
> ZooKeeper JMX enabled by default
> Using config: /usr/local/zookeeper/zookeeper-3.4.12/bin/../conf/zoo.cfg
> Starting zookeeper ... STARTED
> hadoopuser@sherin-VirtualBox:~$ sudo 
> /opt/kafka/kafka-1.1.0-src/bin/kafka-server-start.sh 
> /opt/kafka/kafka-1.1.0-src/config/server.properties
> Classpath is empty. Please build the project first e.g. by running './gradlew 
> jar -PscalaVersion=2.11.12'
> hadoopuser@sherin-VirtualBox:~$ ./gradlew jar -PscalaVersion=2.11.12
> bash: ./gradlew: No such file or directory
> hadoopuser@sherin-VirtualBox:~$ sudo updatedb
> hadoopuser@sherin-VirtualBox:~$ locate gradlew
> hadoopuser@sherin-VirtualBox:~$ ./gradlew jar -PscalaVersion=2.11.12
> bash: ./gradlew: No such file or directory
> hadoopuser@sherin-VirtualBox:~$



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6984) Question

2018-06-02 Thread Manikumar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16498975#comment-16498975
 ] 

Manikumar commented on KAFKA-6984:
--

Take a look at quickstart docs: http://kafka.apache.org/quickstart

> Question
> 
>
> Key: KAFKA-6984
> URL: https://issues.apache.org/jira/browse/KAFKA-6984
> Project: Kafka
>  Issue Type: Bug
>Reporter: Remil
>Priority: Major
>
> hadoopuser@sherin-VirtualBox:~$ sudo su -p - zookeeper -c 
> "/usr/local/zookeeper/zookeeper-3.4.12/bin/zkServer.sh start" ZooKeeper JMX 
> enabled by default
> ZooKeeper JMX enabled by default
> Using config: /usr/local/zookeeper/zookeeper-3.4.12/bin/../conf/zoo.cfg
> Starting zookeeper ... STARTED
> hadoopuser@sherin-VirtualBox:~$ sudo 
> /opt/kafka/kafka-1.1.0-src/bin/kafka-server-start.sh 
> /opt/kafka/kafka-1.1.0-src/config/server.properties
> Classpath is empty. Please build the project first e.g. by running './gradlew 
> jar -PscalaVersion=2.11.12'
> hadoopuser@sherin-VirtualBox:~$ ./gradlew jar -PscalaVersion=2.11.12
> bash: ./gradlew: No such file or directory
> hadoopuser@sherin-VirtualBox:~$ sudo updatedb
> hadoopuser@sherin-VirtualBox:~$ locate gradlew
> hadoopuser@sherin-VirtualBox:~$ ./gradlew jar -PscalaVersion=2.11.12
> bash: ./gradlew: No such file or directory
> hadoopuser@sherin-VirtualBox:~$



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6982) java.lang.ArithmeticException: / by zero

2018-06-02 Thread Manikumar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16498962#comment-16498962
 ] 

Manikumar commented on KAFKA-6982:
--

This race condition is handled in the latest code by adding the processors 
before starting the Acceptor thread.

https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/network/SocketServer.scala#L152
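
The failure mode is easy to reproduce in isolation (an illustrative Python sketch of the round-robin index computation, not the Scala source): if the acceptor starts choosing a processor before any processors are registered, the modulo by the processor count divides by zero; registering the processors first closes the window.

```python
processors = []   # processors not yet registered -- the racy state
current = 0

def next_processor():
    """Round-robin processor choice, as sketched from Acceptor.run."""
    global current
    current = current % len(processors)   # ZeroDivisionError when empty
    proc = processors[current]
    current += 1
    return proc

hit_zero_division = False
try:
    next_processor()                      # acceptor raced ahead of setup
except ZeroDivisionError:                 # Java reports ArithmeticException
    hit_zero_division = True

processors.extend(["p0", "p1", "p2"])     # the fix: register processors first
picks = [next_processor() for _ in range(4)]
# picks == ["p0", "p1", "p2", "p0"]
```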

> java.lang.ArithmeticException: / by zero
> 
>
> Key: KAFKA-6982
> URL: https://issues.apache.org/jira/browse/KAFKA-6982
> Project: Kafka
>  Issue Type: Bug
>  Components: network
>Affects Versions: 1.1.0
> Environment: Environment: Windows 10. 
>Reporter: wade wu
>Priority: Major
>
> Producer keeps sending messages to Kafka, Kafka is down. 
> Server.log shows: 
> ..
> [2018-06-01 17:01:33,945] WARN [Log partition=__consumer_offsets-6, 
> dir=D:\data\Kafka\kafka-logs] Found a corrupted index file corresponding to 
> log file 
> D:\data\Kafka\kafka-logs__consumer_offsets-6\.log due to 
> Corrupt index found, index file 
> (D:\data\Kafka\kafka-logs__consumer_offsets-6\.index) has 
> non-zero size but the last offset is 0 which is no greater than the base 
> offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
>  [2018-06-01 17:01:33,945] WARN [Log partition=__consumer_offsets-6, 
> dir=D:\data\Kafka\kafka-logs] Found a corrupted index file corresponding to 
> log file 
> D:\data\Kafka\kafka-logs__consumer_offsets-6\.log due to 
> Corrupt index found, index file 
> (D:\data\Kafka\kafka-logs__consumer_offsets-6\.index) has 
> non-zero size but the last offset is 0 which is no greater than the base 
> offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
>  [2018-06-01 17:01:34,664] ERROR Error while accepting connection 
> (kafka.network.Acceptor)
>  java.lang.ArithmeticException: / by zero
>  at kafka.network.Acceptor.run(SocketServer.scala:354)
>  at java.lang.Thread.run(Thread.java:748)
>  [2018-06-01 17:01:34,664] ERROR Error while accepting connection 
> (kafka.network.Acceptor)
>  java.lang.ArithmeticException: / by zero
>  at kafka.network.Acceptor.run(SocketServer.scala:354)
>  at java.lang.Thread.run(Thread.java:748)
>  [2018-06-01 17:01:34,664] ERROR Error while accepting connection 
> (kafka.network.Acceptor)
>  java.lang.ArithmeticException: / by zero
>  at kafka.network.Acceptor.run(SocketServer.scala:354)
>  at java.lang.Thread.run(Thread.java:748)
> ..
>  
> This line of code in SocketServer.scala is causing the error:
> currentProcessor = currentProcessor % processors.size



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6309) add support for getting topic defaults from AdminClient

2018-06-01 Thread Manikumar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16498033#comment-16498033
 ] 

Manikumar commented on KAFKA-6309:
--

This functionality is supported in KafkaAdminClient.describeConfigs() API.

We can call _"describeConfigs(topicResource, new 
DescribeConfigsOptions().includeSynonyms(true))"_ to list all the configured 
values and the precedence used to obtain the currently configured value.

more details: 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-226+-+Dynamic+Broker+Configuration]
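
The precedence that includeSynonyms(true) reveals can be modelled like this (an illustrative Python sketch of the KIP-226 resolution order, not the AdminClient API; the source names mirror Kafka's ConfigSource enum): more specific layers win, and the synonym list reports every layer in precedence order.

```python
PRECEDENCE = ["DYNAMIC_TOPIC_CONFIG", "STATIC_BROKER_CONFIG", "DEFAULT_CONFIG"]

def describe(name, layers):
    """Return (effective value, synonyms ordered by precedence)."""
    synonyms = [(src, layers[src][name])
                for src in PRECEDENCE
                if name in layers.get(src, {})]
    return synonyms[0][1], synonyms

layers = {
    "DEFAULT_CONFIG": {"retention.ms": "604800000"},        # broker default
    "STATIC_BROKER_CONFIG": {"retention.ms": "86400000"},   # server.properties
}
value, synonyms = describe("retention.ms", layers)
# value == "86400000"; the default still appears as a lower-priority synonym
```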

> add support for getting topic defaults from AdminClient
> ---
>
> Key: KAFKA-6309
> URL: https://issues.apache.org/jira/browse/KAFKA-6309
> Project: Kafka
>  Issue Type: Improvement
>Reporter: dan norwood
>Assignee: dan norwood
>Priority: Major
>
> kip here: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-234%3A+add+support+for+getting+topic+defaults+from+AdminClient



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6581) ConsumerGroupCommand hangs if even one of the partition is unavailable

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-6581.
--
   Resolution: Fixed
Fix Version/s: (was: 0.10.0.2)
   2.0.0

This is addressed by KIP-266.
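
The shape of the KIP-266 change, here and in the related tickets resolved below, is to bound the wait instead of retrying forever (an illustrative Python sketch, not the client implementation): the lookup loop gets a deadline and fails with a timeout rather than spinning on metadata refresh.

```python
import time

def position_with_timeout(lookup, timeout_s, poll_interval_s=0.01):
    """Retry lookup until it yields a value or the deadline passes."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        value = lookup()             # e.g. committed offset for a partition
        if value is not None:
            return value
        time.sleep(poll_interval_s)  # wait for "metadata refresh"
    raise TimeoutError("partition still unknown after %.2fs" % timeout_s)

# An unavailable partition never resolves -- the call now fails fast
# instead of hanging the way ConsumerGroupCommand did:
try:
    position_with_timeout(lambda: None, timeout_s=0.05)
except TimeoutError:
    pass
```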

> ConsumerGroupCommand hangs if even one of the partition is unavailable
> --
>
> Key: KAFKA-6581
> URL: https://issues.apache.org/jira/browse/KAFKA-6581
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, core, tools
>Affects Versions: 0.10.0.0
>Reporter: Sahil Aggarwal
>Priority: Minor
> Fix For: 2.0.0
>
>
> ConsumerGroupCommand.scala uses a consumer internally to get the position for 
> each partition, but if a partition is unavailable the call 
> consumer.position(topicPartition) will block indefinitely.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-3899) Consumer.poll() stuck in loop if wrong credentials are supplied

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3899.
--
   Resolution: Fixed
 Assignee: (was: Edoardo Comar)
Fix Version/s: 2.0.0

This is addressed by KIP-266. 

> Consumer.poll() stuck in loop if wrong credentials are supplied
> ---
>
> Key: KAFKA-3899
> URL: https://issues.apache.org/jira/browse/KAFKA-3899
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.10.0.0, 0.10.1.0
>Reporter: Edoardo Comar
>Priority: Major
> Fix For: 2.0.0
>
>
> With the broker configured to use SASL PLAIN, if the client supplies wrong 
> credentials, a consumer calling poll() is stuck forever, and only inspection 
> of DEBUG-level logging can tell what is wrong.
> [2016-06-24 12:15:16,455] DEBUG Connection with localhost/127.0.0.1 
> disconnected (org.apache.kafka.common.network.Selector)
> java.io.EOFException
>   at 
> org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:83)
>   at 
> org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
>   at 
> org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.receiveResponseOrToken(SaslClientAuthenticator.java:239)
>   at 
> org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.authenticate(SaslClientAuthenticator.java:182)
>   at 
> org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:64)
>   at 
> org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:318)
>   at org.apache.kafka.common.network.Selector.poll(Selector.java:283)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:260)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:360)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:224)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:192)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.awaitMetadataUpdate(ConsumerNetworkClient.java:134)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:183)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:973)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:937)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-3177) Kafka consumer can hang when position() is called on a non-existing partition.

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3177.
--
Resolution: Fixed

This is addressed by KIP-266.

> Kafka consumer can hang when position() is called on a non-existing partition.
> --
>
> Key: KAFKA-3177
> URL: https://issues.apache.org/jira/browse/KAFKA-3177
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Jiangjie Qin
>Assignee: Jason Gustafson
>Priority: Critical
> Fix For: 2.0.0
>
>
> This can be easily reproduced as following:
> {code}
> {
> ...
> consumer.assign(Collections.singletonList(someNonExistingTopicPartition));
> consumer.position(someNonExistingTopicPartition);
> ...
> }
> {code}
> It seems when position is called we will try to do the following:
> 1. Fetch committed offsets.
> 2. If there is no committed offsets, try to reset offset using reset 
> strategy. in sendListOffsetRequest(), if the consumer does not know the 
> TopicPartition, it will refresh its metadata and retry. In this case, because 
> the partition does not exist, we fall in to the infinite loop of refreshing 
> topic metadata.
> Another orthogonal issue is that if the topic in the above code piece does 
> not exist, the position() call will actually create it, because topic 
> metadata requests can currently auto-create topics. 
> This is a known separate issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-3727) Consumer.poll() stuck in loop on non-existent topic manually assigned

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3727.
--
Resolution: Fixed

This is addressed by KIP-266.

> Consumer.poll() stuck in loop on non-existent topic manually assigned
> -
>
> Key: KAFKA-3727
> URL: https://issues.apache.org/jira/browse/KAFKA-3727
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Edoardo Comar
>Assignee: Edoardo Comar
>Priority: Critical
>
> The behavior of a consumer on poll() for a non-existing topic is surprisingly 
> different/inconsistent 
> between a consumer that subscribed to the topic and one that had the 
> topic-partition manually assigned.
> The "subscribed" consumer will return an empty collection
> The "assigned" consumer will *loop forever* - this feels a bug to me.
> sample snippet to reproduce:
> {quote}
> KafkaConsumer<String, String> assignKc = new KafkaConsumer<>(props1);
> KafkaConsumer<String, String> subsKc = new KafkaConsumer<>(props2);
> List<TopicPartition> tps = new ArrayList<>();
> tps.add(new TopicPartition("topic-not-exists", 0));
> assignKc.assign(tps);
> subsKc.subscribe(Arrays.asList("topic-not-exists"));
> System.out.println("* subscribe k consumer ");
> ConsumerRecords<String, String> crs2 = subsKc.poll(1000L);
> print("subscribeKc", crs2); // returns empty
> System.out.println("* assign k consumer ");
> ConsumerRecords<String, String> crs1 = assignKc.poll(1000L);
> print("assignKc", crs1); // will loop forever!
> {quote}
> the logs for the "assigned" consumer show:
> [2016-05-18 17:33:09,907] DEBUG Updated cluster metadata version 8 to 
> Cluster(nodes = [192.168.10.18:9093 (id: 0 rack: null)], partitions = []) 
> (org.apache.kafka.clients.Metadata)
> [2016-05-18 17:33:09,908] DEBUG Partition topic-not-exists-0 is unknown for 
> fetching offset, wait for metadata refresh 
> (org.apache.kafka.clients.consumer.internals.Fetcher)
> [2016-05-18 17:33:10,010] DEBUG Sending metadata request 
> {topics=[topic-not-exists]} to node 0 (org.apache.kafka.clients.NetworkClient)
> [2016-05-18 17:33:10,011] WARN Error while fetching metadata with correlation 
> id 9 : {topic-not-exists=UNKNOWN_TOPIC_OR_PARTITION} 
> (org.apache.kafka.clients.NetworkClient)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-3503) Throw exception on missing/non-existent partition

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3503.
--
Resolution: Duplicate

> Throw exception on missing/non-existent  partition 
> ---
>
> Key: KAFKA-3503
> URL: https://issues.apache.org/jira/browse/KAFKA-3503
> Project: Kafka
>  Issue Type: Wish
>Affects Versions: 0.9.0.1
> Environment: Java 1.8.0_60. 
> Linux  centos65vm 2.6.32-573.el6.x86_64 #1 SMP Thu Jul 23 15:44:03 UTC
>Reporter: Navin Markandeya
>Priority: Minor
>
> I would expect some exception to be thrown when a consumer tries to access a 
> non-existent partition. I did not see anyone reporting it. If it is already 
> known, please link and close this.
> {code}
> java version "1.8.0_60"
> Java(TM) SE Runtime Environment (build 1.8.0_60-b27)
> Java HotSpot(TM) 64-Bit Server VM (build 25.60-b23, mixed mode)
> {code}
> {code}
> Linux centos65vm 2.6.32-573.el6.x86_64 #1 SMP Thu Jul 23 15:44:03 UTC 2015 
> x86_64 x86_64 x86_64 GNU/Linux
> {code}
> {{Kafka release - kafka_2.11-0.9.0.1}}
> Created a topic with 3 partitions
> {code}
> bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic mytopic
> Topic:mytopic PartitionCount:3ReplicationFactor:1 Configs:
>   Topic: mytopic  Partition: 0Leader: 0   Replicas: 0 Isr: 0
>   Topic: mytopic  Partition: 1Leader: 0   Replicas: 0 Isr: 0
>   Topic: mytopic  Partition: 2Leader: 0   Replicas: 0 Isr: 0
> {code}
> The consumer application does not terminate. An exception stating that there 
> is no such {{mytopic-3}} partition would help to terminate it gracefully.
> {code}
> 14:08:02.885 [main] DEBUG o.a.k.c.c.i.ConsumerCoordinator - Fetching 
> committed offsets for partitions: [mytopic-3, mytopic-0, mytopic-1, mytopic-2]
> 14:08:02.887 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor 
> with name node-2147483647.bytes-sent
> 14:08:02.888 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor 
> with name node-2147483647.bytes-received
> 14:08:02.888 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor 
> with name node-2147483647.latency
> 14:08:02.888 [main] DEBUG o.apache.kafka.clients.NetworkClient - Completed 
> connection to node 2147483647
> 14:08:02.891 [main] DEBUG o.a.k.c.c.i.ConsumerCoordinator - No committed 
> offset for partition mytopic-3
> 14:08:02.891 [main] DEBUG o.a.k.c.consumer.internals.Fetcher - Resetting 
> offset for partition mytopic-3 to latest offset.
> 14:08:02.892 [main] DEBUG o.a.k.c.consumer.internals.Fetcher - Partition 
> mytopic-3 is unknown for fetching offset, wait for metadata refresh
> 14:08:02.965 [main] DEBUG o.apache.kafka.clients.NetworkClient - Sending 
> metadata request ClientRequest(expectResponse=true, callback=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=4,client_id=consumer-2},
>  body={topics=[mytopic]}), isInitiatedByNetworkClient, 
> createdTimeMs=1459804082965, sendTimeMs=0) to node 0
> 14:08:02.968 [main] DEBUG org.apache.kafka.clients.Metadata - Updated cluster 
> metadata version 3 to Cluster(nodes = [Node(0, centos65vm, 9092)], partitions 
> = [Partition(topic = mytopic, partition = 0, leader = 0, replicas = [0,], isr 
> = [0,], Partition(topic = mytopic, partition = 1, leader = 0, replicas = 
> [0,], isr = [0,], Partition(topic = mytopic, partition = 2, leader = 0, 
> replicas = [0,], isr = [0,]])
> 14:08:02.968 [main] DEBUG o.a.k.c.consumer.internals.Fetcher - Partition 
> mytopic-3 is unknown for fetching offset, wait for metadata refresh
> 14:08:03.071 [main] DEBUG o.apache.kafka.clients.NetworkClient - Sending 
> metadata request ClientRequest(expectResponse=true, callback=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=5,client_id=consumer-2},
>  body={topics=[mytopic]}), isInitiatedByNetworkClient, 
> createdTimeMs=1459804083071, sendTimeMs=0) to node 0
> 14:08:03.073 [main] DEBUG org.apache.kafka.clients.Metadata - Updated cluster 
> metadata version 4 to Cluster(nodes = [Node(0, centos65vm, 9092)], partitions 
> = [Partition(topic = mytopic, partition = 0, leader = 0, replicas = [0,], isr 
> = [0,], Partition(topic = mytopic, partition = 1, leader = 0, replicas = 
> [0,], isr = [0,], Partition(topic = mytopic, partition = 2, leader = 0, 
> replicas = [0,], isr = [0,]])
> 14:08:03.073 [main] DEBUG o.a.k.c.consumer.internals.Fetcher - Partition 
> mytopic-3 is unknown for fetching offset, wait for metadata refresh
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6982) java.lang.ArithmeticException: / by zero

2018-06-05 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-6982.
--
   Resolution: Fixed
Fix Version/s: 2.0.0

> java.lang.ArithmeticException: / by zero
> 
>
> Key: KAFKA-6982
> URL: https://issues.apache.org/jira/browse/KAFKA-6982
> Project: Kafka
>  Issue Type: Bug
>  Components: network
>Affects Versions: 1.1.0
> Environment: Environment: Windows 10. 
>Reporter: wade wu
>Priority: Major
> Fix For: 2.0.0
>
>
> Producer keeps sending messages to Kafka, Kafka is down. 
> Server.log shows: 
> ..
> [2018-06-01 17:01:33,945] WARN [Log partition=__consumer_offsets-6, 
> dir=D:\data\Kafka\kafka-logs] Found a corrupted index file corresponding to 
> log file 
> D:\data\Kafka\kafka-logs__consumer_offsets-6\.log due to 
> Corrupt index found, index file 
> (D:\data\Kafka\kafka-logs__consumer_offsets-6\.index) has 
> non-zero size but the last offset is 0 which is no greater than the base 
> offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
>  [2018-06-01 17:01:33,945] WARN [Log partition=__consumer_offsets-6, 
> dir=D:\data\Kafka\kafka-logs] Found a corrupted index file corresponding to 
> log file 
> D:\data\Kafka\kafka-logs__consumer_offsets-6\.log due to 
> Corrupt index found, index file 
> (D:\data\Kafka\kafka-logs__consumer_offsets-6\.index) has 
> non-zero size but the last offset is 0 which is no greater than the base 
> offset 0.}, recovering segment and rebuilding index files... (kafka.log.Log)
>  [2018-06-01 17:01:34,664] ERROR Error while accepting connection 
> (kafka.network.Acceptor)
>  java.lang.ArithmeticException: / by zero
>  at kafka.network.Acceptor.run(SocketServer.scala:354)
>  at java.lang.Thread.run(Thread.java:748)
>  [2018-06-01 17:01:34,664] ERROR Error while accepting connection 
> (kafka.network.Acceptor)
>  java.lang.ArithmeticException: / by zero
>  at kafka.network.Acceptor.run(SocketServer.scala:354)
>  at java.lang.Thread.run(Thread.java:748)
>  [2018-06-01 17:01:34,664] ERROR Error while accepting connection 
> (kafka.network.Acceptor)
>  java.lang.ArithmeticException: / by zero
>  at kafka.network.Acceptor.run(SocketServer.scala:354)
>  at java.lang.Thread.run(Thread.java:748)
> ..
>  
> This line of code in SocketServer.scala is causing the error:
> currentProcessor = currentProcessor % processors.size



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-5940) kafka-delete-records.sh doesn't give any feedback when the JSON offset configuration file is invalid

2018-06-05 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5940.
--
Resolution: Fixed

Fixed in KAFKA-5919
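
The requested behaviour amounts to fail-fast validation of the offsets file (an illustrative Python sketch of the check, not the Scala tool; the error message mirrors the one quoted below): reject input that is not valid JSON, or that lacks the expected partitions list, instead of reporting success.

```python
import json

def parse_offsets(text):
    """Parse a delete-records offsets document, failing loudly on bad input."""
    try:
        doc = json.loads(text)
    except ValueError:
        raise ValueError("Offset json file doesn't contain valid JSON data.")
    if not isinstance(doc.get("partitions"), list):
        raise ValueError("Offset json file is missing a 'partitions' list.")
    return [(p["topic"], p["partition"], p["offset"]) for p in doc["partitions"]]

good = '{"partitions": [{"topic": "t", "partition": 0, "offset": 10}], "version": 1}'
parse_offsets(good)          # [("t", 0, 10)]
# parse_offsets("not json")  would raise instead of reporting success
```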

> kafka-delete-records.sh doesn't give any feedback when the JSON offset 
> configuration file is invalid
> 
>
> Key: KAFKA-5940
> URL: https://issues.apache.org/jira/browse/KAFKA-5940
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Jakub Scholz
>Assignee: Jakub Scholz
>Priority: Major
>
> When deleting records using {{bin/kafka-delete-records.sh}}, the user has to 
> pass a JSON file with the list of topics/partitions and the offset to which 
> the records should be deleted. However, currently when such a file is invalid 
> the utility doesn't print any visible error:
> {code}
> $ bin/kafka-delete-records.sh --bootstrap-server localhost:9092 
> --offset-json-file offset.json
> Executing records delete operation
> Records delete operation completed:
> $
> {code}
> Instead, I would suggest that it throws an exception to make it clear that 
> the problem is the invalid JSON file:
> {code}
> $ bin/kafka-delete-records.sh --bootstrap-server localhost:9092 
> --offset-json-file offset.json
> Exception in thread "main" kafka.common.AdminCommandFailedException: Offset 
> json file doesn't contain valid JSON data.
>   at 
> kafka.admin.DeleteRecordsCommand$.parseOffsetJsonStringWithoutDedup(DeleteRecordsCommand.scala:54)
>   at 
> kafka.admin.DeleteRecordsCommand$.execute(DeleteRecordsCommand.scala:62)
>   at kafka.admin.DeleteRecordsCommand$.main(DeleteRecordsCommand.scala:37)
>   at kafka.admin.DeleteRecordsCommand.main(DeleteRecordsCommand.scala)
> $
> {code}





[jira] [Resolved] (KAFKA-6972) Kafka ACL does not work expected with wildcard

2018-06-05 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-6972.
--
Resolution: Information Provided

> Kafka ACL does not work expected with wildcard
> --
>
> Key: KAFKA-6972
> URL: https://issues.apache.org/jira/browse/KAFKA-6972
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.11.0.0
> Environment: OS : CentOS 7, 64bit.
> Confluent : 3.3, Kafka 0.11.
>Reporter: Soyee Deng
>Assignee: Sönke Liebau
>Priority: Major
>
> Just started with the Confluent 3.3 platform and Kafka 0.11, with SSL for 
> transport security and Kerberos to restrict access based on the 
> authenticated principals. To make life easier, wildcards are used 
> extensively in my environment, but they turned out not to work as expected. 
> My issue is that when I run the _kafka-acls_ command in a directory that 
> contains files, the command picks up the name of the first file as the topic 
> or group name. E.g. in my case, abcd.txt was chosen while I was granting my 
> principal connect-consumer permission to consume messages from any topic 
> with any group id.
> [quality@data-pipeline-1 test_dir]$ 
> KAFKA_OPTS=-Djava.security.auth.login.config='/etc/security/jaas/broker-jaas.conf'
>  kafka-acls --authorizer-properties 
> zookeeper.connect=data-pipeline-1.orion.com:2181 --add --allow-principal 
> User:connect-consumer --consumer --topic * --group *
>  Adding ACLs for resource `Topic:abcd.txt`:
>  User:connect-consumer has Allow permission for operations: Describe from 
> hosts: *
>  User:connect-consumer has Allow permission for operations: Read from hosts: *
> Adding ACLs for resource `Group:abcd.txt`:
>  User:connect-consumer has Allow permission for operations: Read from hosts: *
> Current ACLs for resource `Topic:abcd.txt`:
>  User:connect-consumer has Allow permission for operations: Describe from 
> hosts: *
>  User:connect-consumer has Allow permission for operations: Read from hosts: *
>  User:connect-consumer has Allow permission for operations: Write from hosts: 
> *
> Current ACLs for resource `Group:abcd.txt`:
>  User:connect-consumer has Allow permission for operations: Read from hosts: *
>  
> My current workaround is to change to an empty directory and run the above 
> command; then it works as expected. 
>  





[jira] [Resolved] (KAFKA-5523) ReplayLogProducer not using the new Kafka consumer

2018-06-05 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5523.
--
   Resolution: Fixed
 Assignee: Manikumar
Fix Version/s: 2.0.0

> ReplayLogProducer not using the new Kafka consumer
> --
>
> Key: KAFKA-5523
> URL: https://issues.apache.org/jira/browse/KAFKA-5523
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Paolo Patierno
>Assignee: Manikumar
>Priority: Minor
> Fix For: 2.0.0
>
>
> Hi,
> the ReplayLogProducer is using the latest Kafka producer but not the latest 
> Kafka consumer. Is this tool today deprecated ? I see that something like 
> that could be done using the MirrorMaker. [~ijuma] Does it make sense to 
> update the ReplayLogProducer to the latest Kafka consumer ?
> Thanks,
> Paolo





[jira] [Commented] (KAFKA-5523) ReplayLogProducer not using the new Kafka consumer

2018-05-29 Thread Manikumar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-5523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16493552#comment-16493552
 ] 

Manikumar commented on KAFKA-5523:
--

[~ijuma] If we are going to retain this tool, I can work on this. But in my 
opinion this is not a very useful tool and could be removed.

> ReplayLogProducer not using the new Kafka consumer
> --
>
> Key: KAFKA-5523
> URL: https://issues.apache.org/jira/browse/KAFKA-5523
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Paolo Patierno
>Priority: Minor
>
> Hi,
> the ReplayLogProducer is using the latest Kafka producer but not the latest 
> Kafka consumer. Is this tool deprecated today? I see that something like 
> this could be done using MirrorMaker. [~ijuma] Does it make sense to 
> update the ReplayLogProducer to the latest Kafka consumer?
> Thanks,
> Paolo





[jira] [Updated] (KAFKA-2651) Remove deprecated config alteration from TopicCommand in 0.9.1.0

2018-05-29 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-2651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-2651:
-
Fix Version/s: (was: 2.0.0)

> Remove deprecated config alteration from TopicCommand in 0.9.1.0
> 
>
> Key: KAFKA-2651
> URL: https://issues.apache.org/jira/browse/KAFKA-2651
> Project: Kafka
>  Issue Type: Task
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Manikumar
>Priority: Major
>






[jira] [Resolved] (KAFKA-5346) Kafka Producer Failure After Idle Time

2018-05-28 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5346.
--
Resolution: Not A Problem

Closing as per above comment. Please reopen if you think the issue still exists

> Kafka Producer Failure After Idle Time
> --
>
> Key: KAFKA-5346
> URL: https://issues.apache.org/jira/browse/KAFKA-5346
> Project: Kafka
>  Issue Type: Bug
> Environment: 0.9.0.1 , windows
>Reporter: Manikandan P
>Priority: Major
>  Labels: windows
>
> We are using Kafka (2.11-0.9.0.1) on Windows and using the .NET Kafka SDK 
> (kafka-net) to connect to the Kafka server.
> When we produce data to the Kafka server after 15 minutes of .NET client 
> idle time, we get the below exception in the Kafka SDK logs.
> TcpClient Socket [http://10.X.X.100:9092//10.X.X.211:50290] Socket.Poll(S): 
> Data was not available, may be connection was closed. 
> TcpClient Socket [http://10.X.X.100:9092//10.X.X.211:50290] has been closed 
> successfully.
> It seems that Kafka Server is accepting the socket request but not responding 
> the request due to which we are not able to produce the message to Kafka even 
> though Kafka Server is online.
> We also tried to increase the threads and also decrease the idle time in 
> server.properties as below in kafka Server and still getting above logs.
> num.network.threads=6
> num.io.threads=16
> connections.max.idle.ms =12
> Please help us resolve the above issue as it is breaking the functional flow 
> and we are going live next week.





[jira] [Resolved] (KAFKA-6769) Upgrade jetty library version

2018-05-28 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-6769.
--
   Resolution: Fixed
Fix Version/s: 2.0.0

Jetty version upgraded to 9.4.10.v20180503 in trunk.

> Upgrade jetty library version
> -
>
> Key: KAFKA-6769
> URL: https://issues.apache.org/jira/browse/KAFKA-6769
> Project: Kafka
>  Issue Type: Task
>  Components: core, security
>Affects Versions: 1.1.0
>Reporter: Di Shang
>Priority: Critical
> Fix For: 2.0.0
>
>
> jetty 9.2 has reached end of life as of Jan 2018
> [http://www.eclipse.org/jetty/documentation/current/what-jetty-version.html#d0e203]
> Current version used in Kafka 1.1.0: 9.2.24.v20180105
> For security reason please upgrade to a later version. 





[jira] [Resolved] (KAFKA-4368) Unclean shutdown breaks Kafka cluster

2018-05-28 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-4368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4368.
--
Resolution: Auto Closed

Closing inactive issue.

> Unclean shutdown breaks Kafka cluster
> -
>
> Key: KAFKA-4368
> URL: https://issues.apache.org/jira/browse/KAFKA-4368
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.9.0.1, 0.10.0.0
>Reporter: Anukool Rattana
>Priority: Critical
>
> My team has observed that if a broker process dies uncleanly, it blocks the 
> producer from sending messages to the Kafka topic.
> Here is how to reproduce the problem:
> 1) Create a Kafka 0.10 with three brokers (A, B and C). 
> 2) Create topic with replication_factor = 2 
> 3) Set producer to send messages with "acks=all", meaning all in-sync 
> replicas must acknowledge the write before the producer proceeds to the next 
> message. 
> 4) Force IEM (IBM Endpoint Manager) to send patch to broker A and force 
> server to reboot after patches installed.
> Note: min.insync.replicas = 1
> Result: - Producers are not able to send messages to the Kafka topic after 
> the broker rebooted and rejoined the cluster, with the following error 
> messages. 
> [2016-09-28 09:32:41,823] WARN Error while fetching metadata with correlation 
> id 0 : {logstash=LEADER_NOT_AVAILABLE} 
> (org.apache.kafka.clients.NetworkClient)
> We suspected that the replication_factor (2) is not sufficient for our Kafka 
> environment, but we really need an explanation of what happens when a broker 
> faces an unclean shutdown. 
> The same issue occurred when setting cluster with 2 brokers and 
> replication_factor = 1.
> The workaround I used to recover the service is to clean up both the Kafka 
> topic log files and the ZooKeeper data (rmr /brokers/topics/XXX and rmr 
> /consumers/XXX).
> Note:
> Topic list after A comeback from rebooted.
> Topic:logstash  PartitionCount:3ReplicationFactor:2 Configs:
> Topic: logstash Partition: 0Leader: 1   Replicas: 1,3   Isr: 
> 1,3
> Topic: logstash Partition: 1Leader: 2   Replicas: 2,1   Isr: 
> 2,1
> Topic: logstash Partition: 2Leader: 3   Replicas: 3,2   Isr: 
> 2,3
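For the replication question above: with acks=all the leader waits only for the replicas currently in the in-sync set, so min.insync.replicas=1 lets a lone surviving leader acknowledge writes that are lost if it then dies uncleanly. A hedged sketch of producer settings commonly paired with a topic configured as replication.factor=3 / min.insync.replicas=2 (broker addresses are placeholders, not taken from the report):

```java
import java.util.Properties;

public class DurableProducerConfig {
    // Producer-side settings; topic-side replication.factor=3 and
    // min.insync.replicas=2 (set separately on the broker/topic) complete
    // the durability story: one broker can be lost while writes that would
    // land on too few replicas are rejected instead of silently accepted.
    static Properties buildProps() {
        Properties props = new Properties();
        // Placeholder addresses; listing several brokers lets metadata
        // lookup survive a single broker reboot.
        props.put("bootstrap.servers", "brokerA:9092,brokerB:9092,brokerC:9092");
        props.put("acks", "all"); // waits for the current ISR, not all replicas
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(buildProps().getProperty("acks")); // prints all
    }
}
```

This is general guidance on the acks/ISR interplay, not a claim about what would have prevented the specific outage reported here.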





[jira] [Resolved] (KAFKA-6427) Inconsistent exception type from KafkaConsumer.position

2018-05-28 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-6427.
--
   Resolution: Fixed
Fix Version/s: 2.0.0

Fixed in https://github.com/apache/kafka/pull/5005

> Inconsistent exception type from KafkaConsumer.position
> ---
>
> Key: KAFKA-6427
> URL: https://issues.apache.org/jira/browse/KAFKA-6427
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Reporter: Jay Kahrman
>Priority: Trivial
> Fix For: 2.0.0
>
>
> If KafkaConsumer.position is called with a partition that the consumer isn't 
> assigned to, it throws an IllegalArgumentException. All other APIs throw an 
> IllegalStateException when the consumer tries to act on a partition that is 
> not assigned to the consumer. 
> Looking at the implementation, if it weren't for the subscription test and 
> the IllegalArgumentException thrown at the beginning of 
> KafkaConsumer.position, the very next line would throw an 
> IllegalStateException anyway.





[jira] [Commented] (KAFKA-5905) Remove PrincipalBuilder and DefaultPrincipalBuilder

2018-05-28 Thread Manikumar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-5905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16493037#comment-16493037
 ] 

Manikumar commented on KAFKA-5905:
--

[~hachikuji]  We have deprecated these classes in 1.0.0. Can these be removed 
in 2.0.0?

> Remove PrincipalBuilder and DefaultPrincipalBuilder
> ---
>
> Key: KAFKA-5905
> URL: https://issues.apache.org/jira/browse/KAFKA-5905
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Manikumar
>Priority: Major
> Fix For: 2.0.0
>
>
> These classes were deprecated after KIP-189: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-189%3A+Improve+principal+builder+interface+and+add+support+for+SASL,
>  which is part of 1.0.0.





[jira] [Comment Edited] (KAFKA-5905) Remove PrincipalBuilder and DefaultPrincipalBuilder

2018-05-28 Thread Manikumar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-5905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16493037#comment-16493037
 ] 

Manikumar edited comment on KAFKA-5905 at 5/29/18 2:52 AM:
---

[~hachikuji]  We have deprecated these classes in 1.0.0. Can these be removed 
in 2.0.0?


was (Author: omkreddy):
[~hachikuji]  We have deprecated these classes in 1.0.0. Can these be removed 
be removed in 2.0.0?

> Remove PrincipalBuilder and DefaultPrincipalBuilder
> ---
>
> Key: KAFKA-5905
> URL: https://issues.apache.org/jira/browse/KAFKA-5905
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Manikumar
>Priority: Major
> Fix For: 2.0.0
>
>
> These classes were deprecated after KIP-189: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-189%3A+Improve+principal+builder+interface+and+add+support+for+SASL,
>  which is part of 1.0.0.





[jira] [Resolved] (KAFKA-3011) Consumer.poll(0) blocks if Kafka not accessible

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3011.
--
Resolution: Fixed

This is addressed by KIP-266. 

> Consumer.poll(0) blocks if Kafka not accessible
> ---
>
> Key: KAFKA-3011
> URL: https://issues.apache.org/jira/browse/KAFKA-3011
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
> Environment: all
>Reporter: Eric Bowman
>Priority: Major
>
> Because of this loop in ConsumerNetworkClient:
> {code:java}
> public void awaitMetadataUpdate() {
> int version = this.metadata.requestUpdate();
> do {
> poll(Long.MAX_VALUE);
> } while (this.metadata.version() == version);
> }
> {code}
> ...if Kafka is not reachable (perhaps not running, or other network issues, 
> unclear), then KafkaConsumer.poll(0) will block until it's available.
> I suspect that better behavior would be to throw an exception.
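The blocking behavior follows directly from the quoted loop: poll(Long.MAX_VALUE) repeats until the metadata version changes, which never happens while no broker is reachable. A minimal sketch (hypothetical names, not the actual client code) of the deadline-bounded shape that KIP-266's timeout-taking APIs introduced:

```java
import java.time.Duration;
import java.util.function.BooleanSupplier;

public class BoundedAwaitDemo {
    // The quoted awaitMetadataUpdate polls with Long.MAX_VALUE until the
    // metadata version changes, so an unreachable broker blocks the caller
    // forever. A deadline-bounded wait returns control instead; KIP-266
    // took the same shape with poll(Duration) and TimeoutException.
    static boolean awaitUpdate(BooleanSupplier updated, Duration timeout) {
        long deadline = System.nanoTime() + timeout.toNanos();
        while (!updated.getAsBoolean()) {
            if (System.nanoTime() - deadline >= 0) {
                return false; // timed out: the caller can now raise an error
            }
            Thread.onSpinWait(); // stands in for one bounded network poll
        }
        return true;
    }

    public static void main(String[] args) {
        // A "broker" that never becomes reachable: the wait now terminates.
        System.out.println(awaitUpdate(() -> false, Duration.ofMillis(50))); // false
        System.out.println(awaitUpdate(() -> true, Duration.ofMillis(50)));  // true
    }
}
```

The key design point is that the timeout makes unreachability an observable outcome (a false return, or an exception in the real API) rather than an indefinite hang.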





[jira] [Resolved] (KAFKA-4571) Consumer fails to retrieve messages if started before producer

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-4571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4571.
--
Resolution: Auto Closed

Closing inactive issue. Please reopen if the issue still exists.

> Consumer fails to retrieve messages if started before producer
> --
>
> Key: KAFKA-4571
> URL: https://issues.apache.org/jira/browse/KAFKA-4571
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.10.1.1
> Environment: Ubuntu Desktop 16.04 LTS, Oracle Java 8 1.8.0_101, Core 
> i7 4770K
>Reporter: Sergiu Hlihor
>Priority: Major
>
> In a configuration where topic was never created before, starting the 
> consumer before the producer leads to no messages being consumed 
> (KafkaConsumer.poll() always returns an instance of ConsumerRecords with 0 
> count). 
> Starting another consumer on the same group and topic after messages were 
> produced still does not consume them. Starting another consumer with another 
> groupId appears to work.
> In the consumer logs I see: WARN  NetworkClient - Error while fetching 
> metadata with correlation id 1 : {measurements021=LEADER_NOT_AVAILABLE} 
> Both producer and consumer were launched from inside same JVM. 
> The configuration used is the standard one found in Kafka distribution. If 
> this is a configuration issue, please suggest any change that I should do.
> Thank you





[jira] [Resolved] (KAFKA-5304) Kafka Producer throwing infinite NullPointerExceptions

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5304.
--
Resolution: Auto Closed

Closing inactive issue. Please reopen if the issue still exists.

> Kafka Producer throwing infinite NullPointerExceptions
> --
>
> Key: KAFKA-5304
> URL: https://issues.apache.org/jira/browse/KAFKA-5304
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.9.0.1
> Environment: RedHat Enterprise Linux 6.8
>Reporter: Pranay Kumar Chaudhary
>Priority: Major
>
> 2017-05-22 11:38:56,918 LL="ERROR" TR="kafka-producer-network-thread | 
> application-name.hostname.com" LN="o.a.k.c.p.i.Sender"  Uncaught error in 
> kafka producer I/O thread:
> java.lang.NullPointerException: null
> We are continuously getting this error in the logs, which is filling up the 
> disk space. We are not able to get a stack trace to pinpoint the source of 
> the error.





[jira] [Resolved] (KAFKA-3822) Kafka Consumer close() hangs indefinitely if Kafka Broker shutdown while connected

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3822.
--
Resolution: Fixed

This is addressed by KIP-266. 

> Kafka Consumer close() hangs indefinitely if Kafka Broker shutdown while 
> connected
> --
>
> Key: KAFKA-3822
> URL: https://issues.apache.org/jira/browse/KAFKA-3822
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.1, 0.10.0.0
> Environment: x86 Red Hat 6 (1 broker running zookeeper locally, 
> client running on a separate server)
>Reporter: Alexander Cook
>Assignee: Ashish Singh
>Priority: Major
>
> I am using the KafkaConsumer java client to consume messages. My application 
> shuts down smoothly if I am connected to a Kafka broker, or if I never 
> succeed at connecting to a Kafka broker, but if the broker is shut down while 
> my consumer is connected to it, consumer.close() hangs indefinitely. 
> Here is how I reproduce it: 
> 1. Start 0.9.0.1 Kafka Broker
> 2. Start consumer application and consume messages
> 3. Stop 0.9.0.1 Kafka Broker (ctrl-c or stop script)
> 4. Try to stop application...hangs at consumer.close() indefinitely. 
> I also see this same behavior using 0.10 broker and client. 
> This is my first bug reported to Kafka, so please let me know if I should be 
> following a different format. Thanks! 





[jira] [Resolved] (KAFKA-3457) KafkaConsumer.committed(...) hangs forever if port number is wrong

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3457.
--
Resolution: Fixed

This is addressed by KIP-266. 

> KafkaConsumer.committed(...) hangs forever if port number is wrong
> --
>
> Key: KAFKA-3457
> URL: https://issues.apache.org/jira/browse/KAFKA-3457
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.1
>Reporter: Harald Kirsch
>Assignee: Liquan Pei
>Priority: Major
>
> Create a KafkaConsumer with default settings but with a wrong host:port 
> setting for bootstrap.servers. Have it in some consumer group, do not 
> subscribe or assign partitions.
> Then call .committed(...) for a topic/partition combination a few times. It 
> will hang on the 2nd or third call forever. In the debug log you will see 
> that it repeats connections all over again. I waited many minutes and it 
> never came back to throw an Exception.
> The connection problems should at least surface at the WARNING log level, 
> and should likely throw an exception eventually.





[jira] [Resolved] (KAFKA-4751) kafka-clients-0.9.0.2.4.2.11-1 issue not throwing exception.

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-4751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4751.
--
Resolution: Fixed

This is addressed by KIP-266. 

> kafka-clients-0.9.0.2.4.2.11-1 issue not throwing exception.
> 
>
> Key: KAFKA-4751
> URL: https://issues.apache.org/jira/browse/KAFKA-4751
> Project: Kafka
>  Issue Type: Bug
> Environment: kafka-clients-0.9.0.2.4.2.11-1 java based client
>Reporter: Avinash Kumar Gaur
>Priority: Major
>
> While running a consumer with kafka-clients-0.9.0.2.4.2.11-1.jar and 
> connecting directly to the broker, the Kafka consumer does not throw any 
> exception if the broker is down.
> 1) Create a client with kafka-clients-0.9.0.2.4.2.11-1.jar.
> 2) Do not start the Kafka broker.
> 3) Start the Kafka consumer with the required properties.
> Observation - the consumer does not throw any exception even if the broker 
> is down.
> Expected - it should throw an exception.





[jira] [Resolved] (KAFKA-5424) KafkaConsumer.listTopics() throws Exception when unauthorized topics exist in cluster

2018-06-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5424.
--
Resolution: Fixed

This has been fixed via KAFKA-3396

> KafkaConsumer.listTopics() throws Exception when unauthorized topics exist in 
> cluster
> -
>
> Key: KAFKA-5424
> URL: https://issues.apache.org/jira/browse/KAFKA-5424
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Reporter: Mike Fagan
>Assignee: Mickael Maison
>Priority: Major
>
> KafkaConsumer.listTopics() internally calls Fetcher. 
> getAllTopicMetadata(timeout) and this method will throw a 
> TopicAuthorizationException when there exists an unauthorized topic in the 
> cluster. 
> This behavior runs counter to the API docs and makes listTopics() unusable 
> except in the case where the consumer is authorized for every single topic 
> in the cluster. 
> A potentially better approach is to have Fetcher implement a new method 
> getAuthorizedTopicMetadata(timeout)  and have KafkaConsumer call this method 
> instead of getAllTopicMetadata(timeout) from within KafkaConsumer.listTopics()





[jira] [Commented] (KAFKA-6281) Kafka JavaAPI Producer failed with NotLeaderForPartitionException

2018-06-04 Thread Manikumar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16500540#comment-16500540
 ] 

Manikumar commented on KAFKA-6281:
--

Can you upload server logs and server.properties?

> Kafka JavaAPI Producer failed with NotLeaderForPartitionException
> -
>
> Key: KAFKA-6281
> URL: https://issues.apache.org/jira/browse/KAFKA-6281
> Project: Kafka
>  Issue Type: Bug
>Reporter: Anil
>Priority: Major
> Attachments: server1-controller.log, server2-controller.log
>
>
> We are running Kafka (version kafka_2.11-0.10.1.0) in a 2-node cluster. We 
> have 2 producers (Java API) acting on different topics. Each topic has a 
> single partition. The topic where we had this issue has one consumer 
> running. This setup had been running fine for 3 months when we saw this 
> issue. All the suggested cases/solutions for this issue in other forums 
> don't seem to apply to my scenario.
> Exception at producer;
> {code}
> -2017-11-25T17:40:33,035 [kafka-producer-network-thread | producer-1] ERROR 
> client.producer.BingLogProducerCallback - Encountered exception in sending 
> message ; > org.apache.kafka.common.errors.NotLeaderForPartitionException: 
> This server is not the leader for that topic-partition.
> {code}
> We haven't enabled retries for the messages, because this is transactional 
> data and we want to maintain the order.
> Producer config:
> {code}
> bootstrap.servers : server1ip:9092
> acks :all
> retries : 0
> linger.ms :0
> buffer.memory :1024
> max.request.size :1024000
> key.serializer : org.apache.kafka.common.serialization.StringSerializer
> value.serializer : org.apache.kafka.common.serialization.StringSerializer
> {code}
> We are connecting to server1 from both the producer and the consumer. The 
> controller log at server2 indicates that a shutdown happened around the same 
> time, but I don't understand why this happened.
> [2017-11-25 17:31:44,776] DEBUG [Controller 2]: topics not in 
> preferred replica Map() (kafka.controller.KafkaController) [2017-11-25 
> 17:31:44,776] TRACE [Controller 2]: leader imbalance ratio for broker 2 is 
> 0.00 (kafka.controller.KafkaController) [2017-11-25 17:31:44,776] DEBUG 
> [Controller 2]: topics not in preferred replica Map() 
> (kafka.controller.KafkaController) [2017-11-25 17:31:44,776] TRACE 
> [Controller 2]: leader imbalance ratio for broker 1 is 0.00 
> (kafka.controller.KafkaController) [2017-11-25 17:34:18,314] INFO 
> [SessionExpirationListener on 2], ZK expired; shut down all controller 
> components and try to re-elect 
> (kafka.controller.KafkaController$SessionExpirationListener) [2017-11-25 
> 17:34:18,317] DEBUG [Controller 2]: Controller resigning, broker id 2 
> (kafka.controller.KafkaController) [2017-11-25 17:34:18,317] DEBUG 
> [Controller 2]: De-registering IsrChangeNotificationListener 
> (kafka.controller.KafkaController) [2017-11-25 17:34:18,317] INFO 
> [delete-topics-thread-2], Shutting down 
> (kafka.controller.TopicDeletionManager$DeleteTopicsThread) [2017-11-25 
> 17:34:18,317] INFO [delete-topics-thread-2], Stopped 
> (kafka.controller.TopicDeletionManager$DeleteTopicsThread) [2017-11-25 
> 17:34:18,318] INFO [delete-topics-thread-2], Shutdown completed 
> (kafka.controller.TopicDeletionManager$DeleteTopicsThread) [2017-11-25 
> 17:34:18,318] INFO [Partition state machine on Controller 2]: Stopped 
> partition state machine (kafka.controller.PartitionStateMachine) [2017-11-25 
> 17:34:18,318] INFO [Replica state machine on controller 2]: Stopped replica 
> state machine (kafka.controller.ReplicaStateMachine) [2017-11-25 
> 17:34:18,318] INFO [Controller-2-to-broker-2-send-thread], Shutting down 
> (kafka.controller.RequestSendThread) [2017-11-25 17:34:18,318] INFO 
> [Controller-2-to-broker-2-send-thread], Stopped 
> (kafka.controller.RequestSendThread) [2017-11-25 17:34:18,319] INFO 
> [Controller-2-to-broker-2-send-thread], Shutdown completed 
> (kafka.controller.RequestSendThread) [2017-11-25 17:34:18,319] INFO 
> [Controller-2-to-broker-1-send-thread], Shutting down 
> (kafka.controller.RequestSendThread) [2017-11-25 17:34:18,319] INFO 
> [Controller-2-to-broker-1-send-thread], Stopped 
> (kafka.controller.RequestSendThread) [2017-11-25 17:34:18,319] INFO 
> [Controller-2-to-broker-1-send-thread], Shutdown completed 
> (kafka.controller.RequestSendThread) [2017-11-25 17:34:18,319] INFO 
> [Controller 2]: Broker 2 resigned as the controller 
> (kafka.controller.KafkaController) [2017-11-25 17:34:18,353] DEBUG 
> [IsrChangeNotificationListener] Fired!!! 
> (kafka.controller.IsrChangeNotificationListener) [2017-11-25 17:34:18,353] 
> DEBUG [IsrChangeNotificationListener] Fired!!! 
> (kafka.controller.IsrChangeNotificationListener) [2017-11-25 17:34:18,354] 
> INFO 

[jira] [Commented] (KAFKA-6956) Use Java AdminClient in BrokerApiVersionsCommand

2018-05-27 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16492006#comment-16492006
 ] 

Manikumar commented on KAFKA-6956:
--

Existing JIRA for this task: https://issues.apache.org/jira/browse/KAFKA-5723

> Use Java AdminClient in BrokerApiVersionsCommand
> 
>
> Key: KAFKA-6956
> URL: https://issues.apache.org/jira/browse/KAFKA-6956
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Vahid Hashemian
>Priority: Major
>
> The Scala AdminClient was introduced as a stop gap until we had an officially 
> supported API. The Java AdminClient is the supported API so we should migrate 
> all usages to it and remove the Scala AdminClient. This JIRA is for using the 
> Java AdminClient in BrokerApiVersionsCommand. We would need to verify that 
> the necessary APIs are available via the Java AdminClient.





[jira] [Resolved] (KAFKA-4122) Consumer startup swallows DNS resolution exception and infinitely retries

2018-07-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-4122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4122.
--
Resolution: Duplicate

Fixed under KAFKA-7111.

> Consumer startup swallows DNS resolution exception and infinitely retries
> -
>
> Key: KAFKA-4122
> URL: https://issues.apache.org/jira/browse/KAFKA-4122
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer, network
>Affects Versions: 0.9.0.1
> Environment: Run from Docker image with following Dockerfile:
> {code}
> FROM java:openjdk-8-jre
> ENV DEBIAN_FRONTEND noninteractive
> ENV SCALA_VERSION 2.11
> ENV KAFKA_VERSION 0.9.0.1
> ENV KAFKA_HOME /opt/kafka_"$SCALA_VERSION"-"$KAFKA_VERSION"
> # Install Kafka, Zookeeper and other needed things
> RUN apt-get update && \
> apt-get install -y zookeeper wget supervisor dnsutils && \
> rm -rf /var/lib/apt/lists/* && \
> apt-get clean && \
> wget -q 
> http://apache.mirrors.spacedump.net/kafka/"$KAFKA_VERSION"/kafka_"$SCALA_VERSION"-"$KAFKA_VERSION".tgz
>  -O /tmp/kafka_"$SCALA_VERSION"-"$KAFKA_VERSION".tgz && \
> tar xfz /tmp/kafka_"$SCALA_VERSION"-"$KAFKA_VERSION".tgz -C /opt && \
> rm /tmp/kafka_"$SCALA_VERSION"-"$KAFKA_VERSION".tgz
> {code}
>Reporter: Shane Hender
>Priority: Major
>
> When a consumer encounters nodes whose IPs it can't resolve, I'd expect it 
> to print an ERROR-level message and bubble up an exception, especially if 
> there are no other nodes available.
> Following is the stack trace that was hidden under the DEBUG trace level:
> {code}
> 18:30:47.070 [Filters-akka.kafka.default-dispatcher-7] DEBUG 
> o.apache.kafka.clients.NetworkClient - Initialize connection to node 0 for 
> sending metadata request
> 18:30:47.070 [Filters-akka.kafka.default-dispatcher-7] DEBUG 
> o.apache.kafka.clients.NetworkClient - Initiating connection to node 0 at 
> kafka.docker:9092.
> 18:30:47.071 [Filters-akka.kafka.default-dispatcher-7] DEBUG 
> o.apache.kafka.clients.NetworkClient - Error connecting to node 0 at 
> kafka.docker:9092:
> java.io.IOException: Can't resolve address: kafka.docker:9092
>   at 
> org.apache.kafka.common.network.Selector.connect(Selector.java:156)
>   at 
> org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:489)
>   at 
> org.apache.kafka.clients.NetworkClient.access$400(NetworkClient.java:47)
>   at 
> org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.maybeUpdate(NetworkClient.java:624)
>   at 
> org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.maybeUpdate(NetworkClient.java:543)
>   at 
> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:254)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:320)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:213)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:193)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.awaitMetadataUpdate(ConsumerNetworkClient.java:134)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorKnown(AbstractCoordinator.java:184)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:886)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:853)
>   at 
> akka.kafka.internal.ConsumerStageLogic.poll(ConsumerStage.scala:410)
>   at 
> akka.kafka.internal.CommittableConsumerStage$$anon$1.poll(ConsumerStage.scala:166)
>   at 
> akka.kafka.internal.ConsumerStageLogic$$anon$5.onPull(ConsumerStage.scala:360)
>   at 
> akka.stream.impl.fusing.GraphInterpreter.processEvent(GraphInterpreter.scala:608)
>   at 
> akka.stream.impl.fusing.GraphInterpreter.execute(GraphInterpreter.scala:542)
>   at 
> akka.stream.impl.fusing.GraphInterpreterShell.runBatch(ActorGraphInterpreter.scala:471)
>   at 
> akka.stream.impl.fusing.GraphInterpreterShell.receive(ActorGraphInterpreter.scala:414)
>   at 
> akka.stream.impl.fusing.ActorGraphInterpreter.akka$stream$impl$fusing$ActorGraphInterpreter$$processEvent(ActorGraphInterpreter.scala:603)
>   at 
> akka.stream.impl.fusing.ActorGraphInterpreter$$anonfun$receive$1.applyOrElse(ActorGraphInterpreter.scala:618)
>   at akka.actor.Actor$class.aroundReceive(Actor.scala:484)
>   at 
> 

[jira] [Commented] (KAFKA-7124) Number of AnyLogDir should match the length of the replicas list

2018-07-01 Thread Manikumar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16529073#comment-16529073
 ] 

Manikumar commented on KAFKA-7124:
--

There is no issue with the code; the mentioned code generates "any" for each replica.
 
The generated json file posted by the OP was not generated by the 
"ReassignPartitionsCommand --generate" command; 
it was generated by the OP's custom code. 


> Number of AnyLogDir should match the length of the replicas list
> 
>
> Key: KAFKA-7124
> URL: https://issues.apache.org/jira/browse/KAFKA-7124
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Major
>
> See discussion under 'Partitions reassignment is failing in Kafka 1.1.0' 
> thread reported by Debraj Manna.
> Here is snippet from generated json file:
> {code}
> {"topic": "Topic3", "partition": 7, "log_dirs": ["any"], "replicas": [3, 0, 
> 2]}
> {code}
> Code snippet from ReassignPartitionsCommand.scala :
> {code}
>   "log_dirs" -> replicas.map(r => 
> replicaLogDirAssignment.getOrElse(new TopicPartitionReplica(tp.topic, 
> tp.partition, r), AnyLogDir)).asJava
> {code}
> We know that the appearance of "any" is due to the getOrElse clause.
> There is a bug in the above code: the number of AnyLogDir entries should match 
> the length of the replicas list, or "log_dirs" should be omitted in that case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7123) Documentation: broken link http://confluent.io/docs/current/kafka-rest/docs/intro.html

2018-07-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-7123.
--
Resolution: Fixed

> Documentation: broken link 
> http://confluent.io/docs/current/kafka-rest/docs/intro.html
> --
>
> Key: KAFKA-7123
> URL: https://issues.apache.org/jira/browse/KAFKA-7123
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Vitaliy Fedoriv
>Assignee: Lee Dongjin
>Priority: Minor
>
> Broken link on page
> https://cwiki.apache.org/confluence/display/KAFKA/Clients
> for Confluent REST Proxy
> Should be
> https://docs.confluent.io/current/kafka-rest/docs/index.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-3571) Traits for utilities like ConsumerGroupCommand

2018-07-01 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3571.
--
Resolution: Won't Fix

AdminClient now supports most of the admin APIs. It can be used for 
testing tools or building various utilities. Please reopen if you think 
otherwise.


> Traits for utilities like ConsumerGroupCommand
> --
>
> Key: KAFKA-3571
> URL: https://issues.apache.org/jira/browse/KAFKA-3571
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.9.0.1
>Reporter: Greg Zoller
>Assignee: Liquan Pei
>Priority: Minor
>
> I notice that several utilities like ConsumerGroupCommand are implemented 
> (hard-wired really) to be command-line utilities.  It'd be really handy for 
> testing if these were broken out as Scala traits (that don't call println) 
> with the concrete classes or objects being the command-line utility.
> As a trait I could create a thin wrapper class passing the same array of 
> arguments, and instead of producing screen output the trait could produce 
> result classes.  
> The command-line utilities (your concrete classes implementing the traits) 
> could format screen output from the result classes.
> Why do this?  It'd be a really nice way for test code to query things like 
> offsets and such after a test run.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7100) kafka.tools.GetOffsetShell with enable Kerberos Security on Kafka1.0

2018-06-26 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-7100.
--
Resolution: Duplicate

This is being tracked in KAFKA-5235.

> kafka.tools.GetOffsetShell with enable Kerberos Security on Kafka1.0  
> -
>
> Key: KAFKA-7100
> URL: https://issues.apache.org/jira/browse/KAFKA-7100
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 1.0.1
>Reporter: abdullah toraman
>Priority: Major
>
> Hi All,
> I enabled Kerberos authentication on Kafka 1.0. When I try to run 
> kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list : 
> --topic  --time -1, it hits an error:
> [kafka@KLBIRKFKI1 kafka_2.11-1.0.1]$ bin/kafka-run-class.sh 
> kafka.tools.GetOffsetShell --broker-list klbirkfki1:9092 --topic abdullah 
> --time -1
> [2018-06-26 12:59:00,058] WARN Fetching topic metadata with correlation id 0 
> for topics [Set(abdullah)] from broker [BrokerEndPoint(0,klbirkfki1,9092)] 
> failed (kafka.client.ClientUtils$)
> java.io.EOFException
>     at 
> org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:124)
>     at 
> kafka.network.BlockingChannel.readCompletely(BlockingChannel.scala:131)
>     at kafka.network.BlockingChannel.receive(BlockingChannel.scala:122)
>     at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:82)
>     at 
> kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:79)
>     at kafka.producer.SyncProducer.send(SyncProducer.scala:124)
>     at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:63)
>     at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:98)
>     at kafka.tools.GetOffsetShell$.main(GetOffsetShell.scala:79)
>     at kafka.tools.GetOffsetShell.main(GetOffsetShell.scala)
> Exception in thread "main" kafka.common.KafkaException: fetching topic 
> metadata for topics [Set(abdullah)] from broker 
> [ArrayBuffer(BrokerEndPoint(0,klbirkfki1,9092))] failed
>     at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:77)
>     at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:98)
>     at kafka.tools.GetOffsetShell$.main(GetOffsetShell.scala:79)
>     at kafka.tools.GetOffsetShell.main(GetOffsetShell.scala)
> Caused by: java.io.EOFException
>     at 
> org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:124)
>     at 
> kafka.network.BlockingChannel.readCompletely(BlockingChannel.scala:131)
>     at kafka.network.BlockingChannel.receive(BlockingChannel.scala:122)
>     at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:82)
>     at 
> kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:79)
>     at kafka.producer.SyncProducer.send(SyncProducer.scala:124)
>     at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:63)
>     ... 3 more

[jira] [Comment Edited] (KAFKA-7099) KafkaLog4jAppender - not sending any message with level DEBUG

2018-06-26 Thread Manikumar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523849#comment-16523849
 ] 

Manikumar edited comment on KAFKA-7099 at 6/26/18 3:10 PM:
---

Existing issue:  KAFKA-6415. This was temporarily handled in  KAFKA-6415 by 
changing the log level to WARN.


was (Author: omkreddy):
Existing issue:  KAFKA-6415. This temporarily handled in  KAFKA-6415 by 
changing the log level to WARN.

> KafkaLog4jAppender - not sending any message with level DEBUG
> -
>
> Key: KAFKA-7099
> URL: https://issues.apache.org/jira/browse/KAFKA-7099
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.10.2.0
>Reporter: Vincent Lebreil
>Priority: Major
>
> KafkaLog4jAppender can be stuck if it is defined at root category with level 
> DEBUG
> {{log4j.rootLogger=DEBUG, kafka}}
> {{log4j.appender.kafka=org.apache.kafka.log4jappender.KafkaLog4jAppender}}
> {quote}DEBUG org.apache.kafka.clients.producer.KafkaProducer:131 - Exception 
> occurred during message send:
> org.apache.kafka.common.errors.TimeoutException: Failed to update metadata 
> after 6 ms.
> {quote}
> KafkaLog4jAppender is using a KafkaProducer using itself Log4j with messages 
> at levels TRACE and DEBUG. The appender used in this case is also the 
> KafkaLog4jAppender.
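A common workaround for this recursion (a sketch only; the logger name below is the standard org.apache.kafka client package prefix, and "stdout" is a hypothetical appender name) is to route the Kafka client's own internal logging away from the Kafka appender:

```
log4j.rootLogger=DEBUG, kafka
log4j.appender.kafka=org.apache.kafka.log4jappender.KafkaLog4jAppender
# Send the Kafka client's internal DEBUG/TRACE logging to the console at WARN,
# so the appender's embedded KafkaProducer never logs through itself.
log4j.logger.org.apache.kafka=WARN, stdout
log4j.additivity.org.apache.kafka=false
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d %-5p %c - %m%n
```

With additivity disabled, events from org.apache.kafka loggers stop propagating to the root logger's Kafka appender, which breaks the self-logging loop.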



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7099) KafkaLog4jAppender - not sending any message with level DEBUG

2018-06-26 Thread Manikumar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523849#comment-16523849
 ] 

Manikumar commented on KAFKA-7099:
--

Existing issue: KAFKA-6415. This was temporarily handled in KAFKA-6415 by 
changing the log level to WARN.

> KafkaLog4jAppender - not sending any message with level DEBUG
> -
>
> Key: KAFKA-7099
> URL: https://issues.apache.org/jira/browse/KAFKA-7099
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.10.2.0
>Reporter: Vincent Lebreil
>Priority: Major
>
> KafkaLog4jAppender can be stuck if it is defined at root category with level 
> DEBUG
> {{log4j.rootLogger=DEBUG, kafka}}
> {{log4j.appender.kafka=org.apache.kafka.log4jappender.KafkaLog4jAppender}}
> {quote}DEBUG org.apache.kafka.clients.producer.KafkaProducer:131 - Exception 
> occurred during message send:
> org.apache.kafka.common.errors.TimeoutException: Failed to update metadata 
> after 6 ms.
> {quote}
> KafkaLog4jAppender is using a KafkaProducer using itself Log4j with messages 
> at levels TRACE and DEBUG. The appender used in this case is also the 
> KafkaLog4jAppender.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-7091) AdminClient should handle FindCoordinatorResponse errors

2018-06-26 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-7091:
-
Fix Version/s: (was: 2.0.0)
   2.0.1

> AdminClient should handle FindCoordinatorResponse errors
> 
>
> Key: KAFKA-7091
> URL: https://issues.apache.org/jira/browse/KAFKA-7091
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Manikumar
>Assignee: Manikumar
>Priority: Major
> Fix For: 2.0.1
>
>
> Currently the KafkaAdminClient.deleteConsumerGroups and listConsumerGroupOffsets 
> method implementations ignore FindCoordinatorResponse errors. This causes 
> admin client request timeouts in case of authorization errors. We should 
> handle these errors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-2488) System tests: updated console_consumer.py to support new consumer

2018-06-26 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-2488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2488.
--
Resolution: Fixed

Support was added in Kafka versions 0.10.1 and later.

> System tests: updated console_consumer.py to support new consumer
> -
>
> Key: KAFKA-2488
> URL: https://issues.apache.org/jira/browse/KAFKA-2488
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
>Priority: Major
>
> Console consumer now supports new consumer
> Update console_consumer.py to allow this as well



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-2215) Improve Randomness for ConsoleConsumer

2018-06-26 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-2215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2215.
--
Resolution: Not A Problem

Closing inactive issue. Also the default for console consumer's 
enable.auto.commit is set to false for auto-generated group Ids. 

> Improve Randomness for ConsoleConsumer
> --
>
> Key: KAFKA-2215
> URL: https://issues.apache.org/jira/browse/KAFKA-2215
> Project: Kafka
>  Issue Type: Bug
>Reporter: Fabian Lange
>Priority: Major
>
> Right now the console consumer does a new Random().nextInt(100_000)
> https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/tools/ConsoleConsumer.scala#L123
> I would propose to use UUID.randomUUID().toString() instead.
> I know this is quite edgy, but Random has shown its quirks from time to time.
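The proposal amounts to swapping the bounded random suffix for a UUID. A minimal sketch (helper names are illustrative, not the actual ConsoleConsumer code): nextInt(100000) yields at most 100,000 distinct group ids, while a UUID suffix carries 122 random bits, making collisions between concurrently started console consumers negligible.

```java
import java.util.Random;
import java.util.UUID;

public class GroupIdSketch {
    // Current scheme: at most 100,000 distinct suffixes, so concurrently
    // started console consumers can end up in the same group.
    static String randomGroupId() {
        return "console-consumer-" + new Random().nextInt(100_000);
    }

    // Proposed scheme: a UUID suffix, effectively collision-free.
    static String uuidGroupId() {
        return "console-consumer-" + UUID.randomUUID();
    }

    public static void main(String[] args) {
        System.out.println(randomGroupId()); // e.g. console-consumer-73462
        System.out.println(uuidGroupId());
    }
}
```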



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-7091) AdminClient should handle FindCoordinatorResponse errors

2018-06-23 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-7091:
-
Description: Currently KafkaAdminClient.deleteConsumerGroups, 
listConsumerGroupOffsets method implementations are ignoring 
FindCoordinatorResponse errors. This causes admin client request timeouts 
incase of authorization errors.  We should handle these errors.  (was: 
Currently KafkaAdminClient.deleteConsumerGroups, listConsumerGroupOffsets 
method implementations are ignoring FindCoordinatorResponse errors. This causes 
admin client request timeouts incase authorization errors.  We should handle 
these errors.)

> AdminClient should handle FindCoordinatorResponse errors
> 
>
> Key: KAFKA-7091
> URL: https://issues.apache.org/jira/browse/KAFKA-7091
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Manikumar
>Assignee: Manikumar
>Priority: Major
> Fix For: 2.0.0
>
>
> Currently the KafkaAdminClient.deleteConsumerGroups and listConsumerGroupOffsets 
> method implementations ignore FindCoordinatorResponse errors. This 
> causes admin client request timeouts in case of authorization errors. We 
> should handle these errors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-7091) AdminClient should handle FindCoordinatorResponse errors

2018-06-23 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-7091:
-
Description: Currently KafkaAdminClient.deleteConsumerGroups, 
listConsumerGroupOffsets method implementation ignoring FindCoordinatorResponse 
errors. This causes admin client request timeouts incase of authorization 
errors.  We should handle these errors.  (was: Currently 
KafkaAdminClient.deleteConsumerGroups, listConsumerGroupOffsets method 
implementations are ignoring FindCoordinatorResponse errors. This causes admin 
client request timeouts incase of authorization errors.  We should handle these 
errors.)

> AdminClient should handle FindCoordinatorResponse errors
> 
>
> Key: KAFKA-7091
> URL: https://issues.apache.org/jira/browse/KAFKA-7091
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Manikumar
>Assignee: Manikumar
>Priority: Major
> Fix For: 2.0.0
>
>
> Currently the KafkaAdminClient.deleteConsumerGroups and listConsumerGroupOffsets 
> method implementations ignore FindCoordinatorResponse errors. This causes 
> admin client request timeouts in case of authorization errors. We should 
> handle these errors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-7091) AdminClient should handle FindCoordinatorResponse errors

2018-06-23 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-7091:
-
Description: Currently KafkaAdminClient.deleteConsumerGroups, 
listConsumerGroupOffsets method implementations are ignoring 
FindCoordinatorResponse errors. This causes admin client request timeouts 
incase authorization errors.  We should handle these errors.  (was: Currently 
KafkaAdminClient.deleteConsumerGroups, listConsumerGroupOffsets methods are 
ignoring FindCoordinatorResponse errors. We should handle these errors.)

> AdminClient should handle FindCoordinatorResponse errors
> 
>
> Key: KAFKA-7091
> URL: https://issues.apache.org/jira/browse/KAFKA-7091
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Manikumar
>Assignee: Manikumar
>Priority: Major
> Fix For: 2.0.0
>
>
> Currently the KafkaAdminClient.deleteConsumerGroups and listConsumerGroupOffsets 
> method implementations ignore FindCoordinatorResponse errors. This 
> causes admin client request timeouts in case of authorization errors. We should 
> handle these errors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7091) AdminClient should handle FindCoordinatorResponse errors

2018-06-22 Thread Manikumar (JIRA)
Manikumar created KAFKA-7091:


 Summary: AdminClient should handle FindCoordinatorResponse errors
 Key: KAFKA-7091
 URL: https://issues.apache.org/jira/browse/KAFKA-7091
 Project: Kafka
  Issue Type: Improvement
Reporter: Manikumar
Assignee: Manikumar
 Fix For: 2.0.0


Currently KafkaAdminClient.deleteConsumerGroups, listConsumerGroupOffsets 
methods are ignoring FindCoordinatorResponse errors. We should handle these 
errors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7081) Add describe all topics API to AdminClient

2018-06-20 Thread Manikumar (JIRA)
Manikumar created KAFKA-7081:


 Summary: Add describe all topics API to AdminClient
 Key: KAFKA-7081
 URL: https://issues.apache.org/jira/browse/KAFKA-7081
 Project: Kafka
  Issue Type: Improvement
  Components: admin
Affects Versions: 2.0.0
Reporter: Manikumar
Assignee: Manikumar


Currently AdminClient supports the describeTopics(Collection<String> topicNames) 
method for topic descriptions and listTopics() for topic name listing.

To describe all topics, users currently use listTopics() to get all topic names 
and pass the name list to describeTopics(). 

Since "describe all topics" is a common operation, we propose to add a 
describeTopics() method to get all topic descriptions. This will be simple to 
use and avoids additional metadata requests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6564) Fix broken links in Dockerfile

2018-06-19 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-6564.
--
   Resolution: Fixed
Fix Version/s: 1.0.2

This will get fixed in upcoming 1.0.2 release.

> Fix broken links in Dockerfile
> --
>
> Key: KAFKA-6564
> URL: https://issues.apache.org/jira/browse/KAFKA-6564
> Project: Kafka
>  Issue Type: Test
>Reporter: Andriy Sorokhtey
>Priority: Minor
> Fix For: 1.0.2
>
>
> https://github.com/apache/kafka/blob/1.0.0/tests/docker/Dockerfile
> {noformat}
> # Install binary test dependencies.
> ENV MIRROR="http://mirrors.ocf.berkeley.edu/apache/"
> RUN mkdir -p "/opt/kafka-0.8.2.2" && curl -s 
> "${MIRROR}kafka/0.8.2.2/kafka_2.10-0.8.2.2.tgz" | tar xz --strip-components=1 
> -C "/opt/kafka-0.8.2.2"
> RUN mkdir -p "/opt/kafka-0.9.0.1" && curl -s 
> "${MIRROR}kafka/0.9.0.1/kafka_2.11-0.9.0.1.tgz" | tar xz --strip-components=1 
> -C "/opt/kafka-0.9.0.1"
> RUN mkdir -p "/opt/kafka-0.10.0.1" && curl -s 
> "${MIRROR}kafka/0.10.0.1/kafka_2.11-0.10.0.1.tgz" | tar xz 
> --strip-components=1 -C "/opt/kafka-0.10.0.1"
> RUN mkdir -p "/opt/kafka-0.10.1.1" && curl -s 
> "${MIRROR}kafka/0.10.1.1/kafka_2.11-0.10.1.1.tgz" | tar xz 
> --strip-components=1 -C "/opt/kafka-0.10.1.1"
> RUN mkdir -p "/opt/kafka-0.10.2.1" && curl -s 
> "${MIRROR}kafka/0.10.2.1/kafka_2.11-0.10.2.1.tgz" | tar xz 
> --strip-components=1 -C "/opt/kafka-0.10.2.1"
> RUN mkdir -p "/opt/kafka-0.11.0.0" && curl -s 
> "${MIRROR}kafka/0.11.0.0/kafka_2.11-0.11.0.0.tgz" | tar xz 
> --strip-components=1 -C "/opt/kafka-0.11.0.0"
> {noformat}
> This links seems to be broken and automated tests executed on docker fails 
> with error:
> {noformat}
> log: /bin/sh -c mkdir -p "/opt/kafka-0.8.2.2" && curl -s 
> "${MIRROR}kafka/0.8.2.2/kafka_2.10-0.8.2.2.tgz" | tar xz --strip-components=1 
> -C "/opt/kafka-0.8.2.2"' returned a non-zero code: 2
> {noformat}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6737) Is Kafka impacted by critical vulnerability CVE-2018-7489

2018-06-19 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-6737.
--
   Resolution: Fixed
Fix Version/s: 2.0.0

This will be fixed as part of the upcoming 2.0.0, 1.1.1, and 1.0.2 releases.

> Is Kafka impacted by critical vulnerability CVE-2018-7489
> 
>
> Key: KAFKA-6737
> URL: https://issues.apache.org/jira/browse/KAFKA-6737
> Project: Kafka
>  Issue Type: Bug
>  Components: packaging, security, unit tests
>Affects Versions: 0.10.1.0, 1.1.0, 1.0.1
>Reporter: Akansh Shandilya
>Priority: Critical
> Fix For: 2.0.0
>
>
> Kafka is using FasterXML jackson-databind before 2.8.11.1 and 2.9.x before 
> 2.9.5 , which allows unauthenticated remote code execution because of an 
> incomplete fix for the CVE-2017-7525 deserialization flaw. This is 
> exploitable by sending maliciously crafted JSON input to the readValue method 
> of the ObjectMapper, bypassing a blacklist that is ineffective if the c3p0 
> libraries are available in the classpath.
>  
> I have checked that all released versions of Kafka are using jackson-databind 
> before 2.8.11.1 and 2.9.x before 2.9.5.
> There are three open questions:
> Question 1: Is Kafka impacted by critical vulnerability CVE-2018-7489?
> [http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-7489]
> Question 2: If the answer to the first question is yes, is there any workaround 
> on released versions?
> Question 3: If the answer to the first question is yes, should we fix it in 
> future versions?
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (KAFKA-7081) Add describe all topics API to AdminClient

2018-06-20 Thread Manikumar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16518410#comment-16518410
 ] 

Manikumar edited comment on KAFKA-7081 at 6/20/18 6:02 PM:
---

[~ijuma]  [~cmccabe] Would like to get your opinion before starting a small KIP.


was (Author: omkreddy):
[~ijuma]  [~cmccabe] Would like to get your opinion before starting a small KIP?

> Add describe all topics API to AdminClient
> --
>
> Key: KAFKA-7081
> URL: https://issues.apache.org/jira/browse/KAFKA-7081
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin
>Affects Versions: 2.0.0
>Reporter: Manikumar
>Assignee: Manikumar
>Priority: Minor
>  Labels: needs-kip
>
> Currently AdminClient supports the describeTopics(Collection<String> topicNames) 
> method for topic descriptions and listTopics() for topic name listing.
> To describe all topics, users currently use listTopics() to get all topic 
> names and pass the name list to describeTopics. 
> Since "describe all topics" is a common operation, We propose to add 
> describeTopics() method to get all topic descriptions. This will be simple to 
> use and avoids additional metadata requests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7081) Add describe all topics API to AdminClient

2018-06-20 Thread Manikumar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16518410#comment-16518410
 ] 

Manikumar commented on KAFKA-7081:
--

[~ijuma]  [~cmccabe] Would like to get your opinion before starting a small KIP?

> Add describe all topics API to AdminClient
> --
>
> Key: KAFKA-7081
> URL: https://issues.apache.org/jira/browse/KAFKA-7081
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin
>Affects Versions: 2.0.0
>Reporter: Manikumar
>Assignee: Manikumar
>Priority: Minor
>  Labels: needs-kip
>
> Currently AdminClient supports the describeTopics(Collection<String> topicNames) 
> method for topic descriptions and listTopics() for topic name listing.
> To describe all topics, users currently use listTopics() to get all topic 
> names and pass the name list to describeTopics. 
> Since "describe all topics" is a common operation, We propose to add 
> describeTopics() method to get all topic descriptions. This will be simple to 
> use and avoids additional metadata requests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-7081) Add describe all topics API to AdminClient

2018-06-20 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-7081:
-
Description: 
Currently AdminClient supports describeTopics(Collection topicNames) 
method for topic descriptions and listTopics() for topic name listing.

To describe all topics, users currently use listTopics() to get all topic names 
and pass the name list to describeTopics. 

Since "describe all topics" is a common operation, We propose to add 
describeTopics() method to get all topic descriptions. This will be simple to 
use and avoids additional metadata requests.

  was:
Currently AdminClient supports describeTopics(Collection topicNames) 
method for topic
 descriptions and listTopics() for topic name listing.

To describe all topics, users currently use listTopics() to get all topic names 
and pass the name
 list to describeTopics. 

Since "describe all topics" is a common operation, We propose to add 
describeTopics() method to get all topic descriptions. This will be simple to 
use and avoids additional metadata requests.


> Add describe all topics API to AdminClient
> --
>
> Key: KAFKA-7081
> URL: https://issues.apache.org/jira/browse/KAFKA-7081
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin
>Affects Versions: 2.0.0
>Reporter: Manikumar
>Assignee: Manikumar
>Priority: Minor
>  Labels: needs-kip
>
> Currently AdminClient supports the describeTopics(Collection<String> topicNames) 
> method for topic descriptions and listTopics() for topic name listing.
> To describe all topics, users currently use listTopics() to get all topic 
> names and pass the name list to describeTopics. 
> Since "describe all topics" is a common operation, We propose to add 
> describeTopics() method to get all topic descriptions. This will be simple to 
> use and avoids additional metadata requests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-3018) Kafka producer hangs on producer.close() call if the producer topic contains single quotes in the topic name

2018-06-20 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3018.
--
Resolution: Duplicate

Resolving this as a duplicate of KAFKA-5098. Please reopen if you think otherwise.

> Kafka producer hangs on producer.close() call if the producer topic contains 
> single quotes in the topic name
> 
>
> Key: KAFKA-3018
> URL: https://issues.apache.org/jira/browse/KAFKA-3018
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.10.0.0, 0.11.0.0, 1.0.0
>Reporter: kanav anand
>Priority: Major
>
> Creating a topic with quotes in the name throws an exception, but if you 
> try to close a producer configured with a topic name containing single 
> quotes, the producer hangs.
> This can be easily replicated and verified by setting topic.name for a producer 
> to a string containing single quotes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7083) maxTickMessages in ConsumerGroup option

2018-06-21 Thread Manikumar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16518999#comment-16518999
 ] 

Manikumar commented on KAFKA-7083:
--

It looks like you are using a Node.js Kafka client. Can you post this query on 
the Node.js client mailing list or GitHub issue list?

> maxTickMessages in ConsumerGroup option
> ---
>
> Key: KAFKA-7083
> URL: https://issues.apache.org/jira/browse/KAFKA-7083
> Project: Kafka
>  Issue Type: Bug
>Reporter: Rahul
>Priority: Major
>
> Hello,
> We are using Kafka v2.11.1.0.0. We have only one partition for our topic. 
> While consuming a record from the topic, I am setting maxTickMessages = 1 in 
> the Kafka consumer group. It returns 2 records; I do not understand why it 
> gives me 1 extra record over the mentioned size. Whenever I increase the 
> number in maxTickMessages, it gives me one extra record.
> Can someone please suggest me a solution to this issue?
> Thanks & Regards,
> Rahul Singh



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-5080) Convert ProducerBounceTest to use the new KafkaConsumer

2018-06-26 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5080.
--
Resolution: Fixed

ProducerBounceTest is removed as part of old consumer changes.

> Convert ProducerBounceTest to use the new KafkaConsumer
> ---
>
> Key: KAFKA-5080
> URL: https://issues.apache.org/jira/browse/KAFKA-5080
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Apurva Mehta
>Assignee: Apurva Mehta
>Priority: Major
>
> In KAFKA-5079, the `SocketTimeoutException` indicates that the consumer is 
> stuck waiting for a response from a particular broker, even after the broker 
> has been bounced. Since a given broker in that test always uses the same port 
> is is possible that this is a symptom of a bug in the SimpleConsumer where it 
> doesn't detect the bounce (and hence disconnect), causing it to time out. 
> We should use the new consumer to rule out a client bug, or fix it if it also 
> exists in the new consumer.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-5079) ProducerBounceTest fails occasionally with a SocketTimeoutException

2018-06-26 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5079.
--
Resolution: Fixed

ProducerBounceTest is removed as part of the old consumer changes.

> ProducerBounceTest fails occasionally with a SocketTimeoutException
> ---
>
> Key: KAFKA-5079
> URL: https://issues.apache.org/jira/browse/KAFKA-5079
> Project: Kafka
>  Issue Type: Bug
>Reporter: Apurva Mehta
>Priority: Major
>
> {noformat}
> java.net.SocketTimeoutException
>   at 
> sun.nio.ch.SocketAdaptor$SocketInputStream.read(SocketAdaptor.java:229)
>   at sun.nio.ch.ChannelInputStream.read(ChannelInputStream.java:103)
>   at 
> java.nio.channels.Channels$ReadableByteChannelImpl.read(Channels.java:385)
>   at 
> org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:85)
>   at 
> kafka.network.BlockingChannel.readCompletely(BlockingChannel.scala:129)
>   at kafka.network.BlockingChannel.receive(BlockingChannel.scala:120)
>   at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:100)
>   at 
> kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:84)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:133)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:133)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:133)
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:32)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(SimpleConsumer.scala:132)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:132)
>   at 
> kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:132)
>   at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:32)
>   at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:131)
>   at 
> kafka.api.ProducerBounceTest$$anonfun$2.apply(ProducerBounceTest.scala:116)
>   at 
> kafka.api.ProducerBounceTest$$anonfun$2.apply(ProducerBounceTest.scala:113)
> {noformat}
> This is expected occasionally, since the ports are preallocated and the 
> brokers are bounced in quick succession. Here is the relevant comment from 
> the code: 
> {noformat}
>   // This is the one of the few tests we currently allow to preallocate 
> ports, despite the fact that this can result in transient
>   // failures due to ports getting reused. We can't use random ports because 
> of bad behavior that can result from bouncing
>   // brokers too quickly when they get new, random ports. If we're not 
> careful, the client can end up in a situation
>   // where metadata is not refreshed quickly enough, and by the time it's 
> actually trying to, all the servers have
>   // been bounced and have new addresses. None of the bootstrap nodes or 
> current metadata can get them connected to a
>   // running server.
>   //
>   // Since such quick rotation of servers is incredibly unrealistic, we allow 
> this one test to preallocate ports, leaving
>   // a small risk of hitting errors due to port conflicts. Hopefully this is 
> infrequent enough to not cause problems.
> {noformat}
> We should try to look into handling this exception better so that the test 
> doesn't fail occasionally. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6211) PartitionAssignmentState is protected whereas KafkaConsumerGroupService.describeGroup() returns PartitionAssignmentState Object

2018-06-26 Thread Manikumar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523616#comment-16523616
 ] 

Manikumar commented on KAFKA-6211:
--

ConsumerGroupCommand internal classes/methods are not public API. You are 
encouraged to use the Kafka AdminClient consumer group management methods.

> PartitionAssignmentState is protected whereas 
> KafkaConsumerGroupService.describeGroup() returns PartitionAssignmentState 
> Object
> ---
>
> Key: KAFKA-6211
> URL: https://issues.apache.org/jira/browse/KAFKA-6211
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 0.10.0.0, 0.10.0.1, 0.10.1.0, 0.10.1.1, 0.10.2.0, 
> 0.10.2.1, 0.11.0.0, 0.11.0.1, 1.0.0
>Reporter: Subhransu Acharya
>Priority: Critical
>  Labels: patch
> Attachments: a.png
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
>  KafkaConsumerGroupService.describeGroup() is returning 
> Tuple2, Option>> but 
> ConsumerGroupCommand has PartitionAssignmentState as a protected class inside 
> it.
> There is no way to create an instance of PartitionAssignmentState.
> make it default in order to use the describe command.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (KAFKA-6211) PartitionAssignmentState is protected whereas KafkaConsumerGroupService.describeGroup() returns PartitionAssignmentState Object

2018-06-26 Thread Manikumar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523616#comment-16523616
 ] 

Manikumar edited comment on KAFKA-6211 at 6/26/18 11:56 AM:


ConsumerGroupCommand internal classes/methods are not public API. You are 
encouraged to use the Kafka AdminClient consumer group management methods.


was (Author: omkreddy):
ConsumerGroupCommand internal classes/methods are not public API. You 
encouraged to use kafka AdminClient consumer group management methods.

> PartitionAssignmentState is protected whereas 
> KafkaConsumerGroupService.describeGroup() returns PartitionAssignmentState 
> Object
> ---
>
> Key: KAFKA-6211
> URL: https://issues.apache.org/jira/browse/KAFKA-6211
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 0.10.0.0, 0.10.0.1, 0.10.1.0, 0.10.1.1, 0.10.2.0, 
> 0.10.2.1, 0.11.0.0, 0.11.0.1, 1.0.0
>Reporter: Subhransu Acharya
>Priority: Critical
>  Labels: patch
> Attachments: a.png
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
>  KafkaConsumerGroupService.describeGroup() is returning 
> Tuple2, Option>> but 
> ConsumerGroupCommand has PartitionAssignmentState as a protected class inside 
> it.
> There is no way to create an instance of PartitionAssignmentState.
> make it default in order to use the describe command.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-4513) Support migration of old consumers to new consumers without downtime

2018-06-26 Thread Manikumar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-4513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523625#comment-16523625
 ] 

Manikumar commented on KAFKA-4513:
--

[~ijuma] Can we close this JIRA?

> Support migration of old consumers to new consumers without downtime
> 
>
> Key: KAFKA-4513
> URL: https://issues.apache.org/jira/browse/KAFKA-4513
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Ismael Juma
>Assignee: Onur Karaman
>Priority: Major
>
> Some ideas were discussed in the following thread:
> http://markmail.org/message/ovngfw3ibixlquxh



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-6835) Enable topic unclean leader election to be enabled without controller change

2018-04-29 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-6835:


Assignee: Manikumar

> Enable topic unclean leader election to be enabled without controller change
> 
>
> Key: KAFKA-6835
> URL: https://issues.apache.org/jira/browse/KAFKA-6835
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Rajini Sivaram
>Assignee: Manikumar
>Priority: Major
>
> Dynamic update of broker's default unclean.leader.election.enable will be 
> processed without controller change (KAFKA-6526). We should probably do the 
> same for topic overrides as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-4701) Allow kafka brokers to dynamically reload truststore without restarting.

2018-05-01 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459756#comment-16459756
 ] 

Manikumar commented on KAFKA-4701:
--

Can this be resolved as duplicate of KAFKA-6810?

> Allow kafka brokers to dynamically reload truststore without restarting.
> 
>
> Key: KAFKA-4701
> URL: https://issues.apache.org/jira/browse/KAFKA-4701
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Reporter: Allen Xiang
>Priority: Major
>  Labels: security
> Fix For: 2.0.0
>
>
> Right now in order to add SSL clients(update broker truststores), a rolling 
> restart of all brokers is required. This is very time consuming and 
> unnecessary. A dynamic truststore manager is needed to reload truststore from 
> file system without restarting brokers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-3143) inconsistent state in ZK when all replicas are dead

2018-05-01 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16459817#comment-16459817
 ] 

Manikumar commented on KAFKA-3143:
--

From 1.1.0 (KAFKA-5083), the last ISR is preserved in ZK, irrespective of 
whether unclean leader election is enabled.

> inconsistent state in ZK when all replicas are dead
> ---
>
> Key: KAFKA-3143
> URL: https://issues.apache.org/jira/browse/KAFKA-3143
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jun Rao
>Assignee: Ismael Juma
>Priority: Major
>  Labels: reliability
> Fix For: 2.0.0
>
>
> This issue can be recreated in the following steps.
> 1. Start 3 brokers, 1, 2 and 3.
> 2. Create a topic with a single partition and 2 replicas, say on broker 1 and 
> 2.
> If we stop both replicas 1 and 2, depending on where the controller is, the 
> leader and isr stored in ZK in the end are different.
> If the controller is on broker 3, what's stored in ZK will be -1 for leader 
> and an empty set for ISR.
> On the other hand, if the controller is on broker 2 and we stop broker 1 
> followed by broker 2, what's stored in ZK will be 2 for leader and 2 for ISR.
> The issue is that in the first case, the controller will call 
> ReplicaStateMachine to transition to OfflineReplica, which will change the 
> leader and isr. However, in the second case, the controller fails over, but 
> we don't transition ReplicaStateMachine to OfflineReplica during controller 
> initialization.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-3417) Invalid characters in config properties not being validated?

2018-05-01 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3417.
--
   Resolution: Fixed
 Assignee: Mickael Maison  (was: Grant Henke)
Fix Version/s: 2.0.0

> Invalid characters in config properties not being validated?
> 
>
> Key: KAFKA-3417
> URL: https://issues.apache.org/jira/browse/KAFKA-3417
> Project: Kafka
>  Issue Type: Bug
>  Components: config
>Affects Versions: 0.9.0.1
>Reporter: Byron Ruth
>Assignee: Mickael Maison
>Priority: Minor
> Fix For: 2.0.0
>
>
> I ran into an error using a {{client.id}} with invalid characters (per the 
> [config 
> validator|https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/common/Config.scala#L25-L35]).
>  I was able to get that exact error using the {{kafka-console-consumer}} 
> script, presumably because I supplied a consumer properties file and it 
> validated prior to hitting the server. However, when I use a client library 
> (sarama for Go in this case), an error in the metrics subsystem is thrown 
> [here|https://github.com/apache/kafka/blob/977ebbe9bafb6c1a6e1be69620f745712118fe80/clients/src/main/java/org/apache/kafka/common/metrics/Metrics.java#L380].
> The stacktrace is:
> {code:title=stack.java}
> [2016-03-17 17:43:47,342] ERROR [KafkaApi-0] error when handling request 
> Name: FetchRequest; Version: 0; CorrelationId: 2; ClientId: foo:bar; 
> ReplicaId: -1; MaxWait: 250 ms; MinBytes: 1 bytes; RequestInfo: [foo,0] -> 
> PartitionFetchInfo(0,32768) (kafka.server.KafkaApis)
> org.apache.kafka.common.KafkaException: Error creating mbean attribute for 
> metricName :MetricName [name=throttle-time, group=Fetch, description=Tracking 
> average throttle-time per client, tags={client-id=foo:bar}]
>   at 
> org.apache.kafka.common.metrics.JmxReporter.addAttribute(JmxReporter.java:113)
>   at 
> org.apache.kafka.common.metrics.JmxReporter.metricChange(JmxReporter.java:76)
>   at 
> org.apache.kafka.common.metrics.Metrics.registerMetric(Metrics.java:288)
>   at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:177)
>   at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:162)
> ...
> {code}
> Assuming the cause is related to the invalid characters, shouldn't the 
> {{clientId}} be validated when the request header is decoded, prior to being used?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (KAFKA-6091) Authorization API is called hundred's of times when there are no privileges

2017-10-20 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16212476#comment-16212476
 ] 

Manikumar edited comment on KAFKA-6091 at 10/20/17 10:36 AM:
-

This behavior is changed in KAFKA-5547.  Now clients will fail after any topic 
authorization errors.


was (Author: omkreddy):
This behavior is changed in KAFKA-5547. 

> Authorization API is called hundred's of times when there are no privileges
> ---
>
> Key: KAFKA-6091
> URL: https://issues.apache.org/jira/browse/KAFKA-6091
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.11.0.0
>Reporter: kalyan kumar kalvagadda
>
> This issue is observed with kafka/sentry integration. When sentry does not 
> have any permissions for a topic and there is a producer trying to add a 
> message to a topic, sentry returns failure but Kafka is not able to handle it 
> properly and is ending up invoking sentry Auth API ~564 times. This will 
> choke authorization service.
> Here are the list of privileges that are needed for a producer to add a 
> message to a topic
> In this example "192.168.0.3" is hostname and topic name is "tOpIc1"
> {noformat}
> HOST=192.168.0.3->Topic=tOpIc1->action=DESCRIBE
> HOST=192.168.0.3->Cluster=kafka-cluster->action=CREATE
> HOST=192.168.0.3->Topic=tOpIc1->action=WRITE
> {noformat}
> The problem reported in this jira is seen when there are no permissions. The 
> moment a DESCRIBE permission is added, the issue is not seen: authorization 
> fails, but Kafka doesn't bombard the service with more requests.
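The fix referenced above (KAFKA-5547) treats topic authorization errors as fatal to the client rather than retriable, which is what stops the request flood. A Python sketch of the idea, with hypothetical names (this is not the actual Kafka client code):

```python
class AuthorizationError(Exception):
    """Non-retriable: retrying cannot succeed until ACLs change."""

def send_with_retries(send, max_retries=3):
    """Retry transient failures, but fail fast on authorization errors,
    sketching the post-KAFKA-5547 behavior described in this thread."""
    attempts = 0
    while True:
        attempts += 1
        try:
            return send()
        except AuthorizationError:
            raise  # surface immediately: no retry storm against the authorizer
        except Exception:
            if attempts > max_retries:
                raise

calls = []
def denied():
    calls.append(1)
    raise AuthorizationError("topic tOpIc1: DESCRIBE denied")

try:
    send_with_retries(denied)
except AuthorizationError:
    pass
print(len(calls))  # 1: a single request instead of hundreds
```

Classifying the error as non-retriable is the design choice that protects the authorization service.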



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6091) Authorization API is called hundred's of times when there are no privileges

2017-10-20 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16212476#comment-16212476
 ] 

Manikumar commented on KAFKA-6091:
--

This behavior is changed in KAFKA-5547. 

> Authorization API is called hundred's of times when there are no privileges
> ---
>
> Key: KAFKA-6091
> URL: https://issues.apache.org/jira/browse/KAFKA-6091
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.11.0.0
>Reporter: kalyan kumar kalvagadda
>
> This issue is observed with kafka/sentry integration. When sentry does not 
> have any permissions for a topic and there is a producer trying to add a 
> message to a topic, sentry returns failure but Kafka is not able to handle it 
> properly and is ending up invoking sentry Auth API ~564 times. This will 
> choke authorization service.
> Here are the list of privileges that are needed for a producer to add a 
> message to a topic
> In this example "192.168.0.3" is hostname and topic name is "tOpIc1"
> {noformat}
> HOST=192.168.0.3->Topic=tOpIc1->action=DESCRIBE
> HOST=192.168.0.3->Cluster=kafka-cluster->action=CREATE
> HOST=192.168.0.3->Topic=tOpIc1->action=WRITE
> {noformat}
> The problem reported in this jira is seen when there are no permissions. The 
> moment a DESCRIBE permission is added, the issue is not seen: authorization 
> fails, but Kafka doesn't bombard the service with more requests.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-6072) Use ZookeeperClient in GroupCoordinator and TransactionCoordinator

2017-10-19 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-6072:


Assignee: Manikumar

> Use ZookeeperClient in GroupCoordinator and TransactionCoordinator
> --
>
> Key: KAFKA-6072
> URL: https://issues.apache.org/jira/browse/KAFKA-6072
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Affects Versions: 1.1.0
>Reporter: Jun Rao
>Assignee: Manikumar
>
> We want to replace the usage of ZkUtils in GroupCoordinator and 
> TransactionCoordinator with ZookeeperClient.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KAFKA-5646) Use async ZookeeperClient for DynamicConfigManager

2017-10-28 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar reassigned KAFKA-5646:


Assignee: Manikumar

> Use async ZookeeperClient for DynamicConfigManager
> --
>
> Key: KAFKA-5646
> URL: https://issues.apache.org/jira/browse/KAFKA-5646
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Affects Versions: 1.1.0
>Reporter: Ismael Juma
>Assignee: Manikumar
> Fix For: 1.1.0
>
>
> We want to replace the usage of ZkUtils with ZookeeperClient in 
> DynamicConfigManager.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-6380) transient failure in MetricsTest.testMetrics

2017-12-22 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-6380.
--
Resolution: Won't Fix

Related code is getting changed in KAFKA-6320. 

>  transient failure in MetricsTest.testMetrics
> -
>
> Key: KAFKA-6380
> URL: https://issues.apache.org/jira/browse/KAFKA-6380
> Project: Kafka
>  Issue Type: Bug
>Reporter: Manikumar
>Priority: Minor
>
> java.lang.AssertionError: ZooKeeper latency not recorded
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at kafka.api.MetricsTest.verifyBrokerZkMetrics(MetricsTest.scala:213)
>   at kafka.api.MetricsTest.testMetrics(MetricsTest.scala:95)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-2329) Consumers balance fails when multiple consumers are started simultaneously.

2018-01-10 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2329.
--
Resolution: Auto Closed

Closing inactive issue. The old consumer is no longer supported, please upgrade 
to the Java consumer whenever possible.

> Consumers balance fails when multiple consumers are started simultaneously.
> ---
>
> Key: KAFKA-2329
> URL: https://issues.apache.org/jira/browse/KAFKA-2329
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.8.1.1, 0.8.2.1
>Reporter: Ze'ev Eli Klapow
>Assignee: Ze'ev Eli Klapow
>  Labels: consumer, patch
> Attachments: zookeeper-consumer-connector-epoch-node.patch
>
>
> During consumer startup a race condition can occur if multiple consumers are 
> started (nearly) simultaneously. 
> If a second consumer is started while the first consumer is in the middle of 
> {{zkClient.subscribeChildChanges}} the first consumer will never see the 
> registration of the second consumer, because the consumer registration node 
> for the second consumer will be unwatched, and no new child will be 
> registered later. This causes the first consumer to own all partitions, and 
> then never release ownership causing the second consumer to fail rebalancing.
> The attached patch solves this by using an "epoch" node which all consumers 
> watch and update to trigger  a rebalance. When a rebalance is triggered we 
> check the consumer registrations against a cached state, to avoid unnecessary 
> rebalances. For safety, we also periodically check the consumer registrations 
> and rebalance. We have been using this patch in production at HubSpot for a 
> while and it has eliminated all rebalance issues.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-2331) Kafka does not spread partitions in a topic among all consumers evenly

2018-01-10 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2331.
--
Resolution: Auto Closed

Closing inactive issue. The old consumer is no longer supported, please upgrade 
to the Java consumer whenever possible.

> Kafka does not spread partitions in a topic among all consumers evenly
> --
>
> Key: KAFKA-2331
> URL: https://issues.apache.org/jira/browse/KAFKA-2331
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer
>Affects Versions: 0.8.1.1
>Reporter: Stefan Miklosovic
>
> I want to have 1 topic with 10 partitions. I am using default configuration 
> of Kafka. I create 1 topic with 10 partitions by that helper script and now I 
> am about to produce messages to it.
> The thing is that even though all partitions are indeed consumed, some 
> consumers have more than 1 partition assigned even though I have a number of 
> consumer threads equal to the partitions in the topic, hence some threads are idle.
> Let's describe it in more detail.
> I know the common guidance that you need one consumer thread per partition. I 
> want to be able to commit offsets per partition and this is possible only 
> when I have 1 thread per consumer connector per partition (I am using high 
> level consumer).
> So I create 10 threads, in each thread I am calling 
> Consumer.createJavaConsumerConnector() where I am doing this
> topicCountMap.put("mytopic", 1);
> and in the end I have 1 iterator which consumes messages from 1 partition.
> When I do this 10 times, I have 10 consumers, consumer per thread per 
> partition where I can commit offsets independently per partition because if I 
> put a number different from 1 in the topic map, I would end up with more than 1 
> consumer thread for that topic for a given consumer instance, so if I am about 
> to commit offsets with created consumer instance, it would commit them for 
> all threads which is not desired.
> But the thing is that when I use consumers, only 7 consumers are involved and 
> it seems that other consumer threads are idle but I do not know why.
> The thing is that I am creating these consumer threads in a loop. So I start 
> first thread (submit to executor service), then another, then another and so 
> on.
> So the scenario is that first consumer gets all 10 partitions, then 2nd 
> connects so it is splits between these two to 5 and 5 (or something similar), 
> then other threads are connecting.
> I understand this as a partition rebalancing among all consumers so it 
> behaves well in such sense that if more consumers are being created, 
> partition rebalancing occurs between these consumers so every consumer should 
> have some partitions to operate upon.
> But from the results I see that there is only 7 consumers and according to 
> consumed messages it seems they are split like 3,2,1,1,1,1,1 partition-wise. 
> Yes, these 7 consumers covered all 10 partitions, but why do consumers with 
> more than 1 partition not split and give partitions to the remaining 3 consumers?
> I am pretty much wondering what is happening with the remaining 3 threads and 
> why they do not "grab" partitions from consumers which have more than 1 
> partition assigned.
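For reference, a healthy rebalance of 10 partitions over 10 consumer threads should leave no thread idle. A simplified Python sketch of range-style assignment (contiguous blocks, with the first P mod C consumers taking one extra partition; modeled loosely on Kafka's range assignor, not taken from its source):

```python
def range_assign(partitions, consumers):
    """Simplified range assignment: each consumer gets a contiguous block;
    the first (P % C) consumers in sorted order get one extra partition."""
    p, c = len(partitions), len(consumers)
    per, extra = divmod(p, c)
    assignment, start = {}, 0
    for i, consumer in enumerate(sorted(consumers)):
        count = per + (1 if i < extra else 0)
        assignment[consumer] = partitions[start:start + count]
        start += count
    return assignment

# 10 partitions over 10 consumer threads: one partition each, none idle.
result = range_assign(list(range(10)), [f"thread-{i}" for i in range(10)])
print([len(v) for v in result.values()])  # ten lists of length 1
```

The uneven 3,2,1,1,1,1,1 split the reporter observed is therefore not what a completed rebalance should produce; it points at the missed-registration race rather than at the assignment math.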



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-2025) In multi-consumer setup - explicit commit, commits on all partitions

2018-01-10 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2025.
--
Resolution: Auto Closed

Closing inactive issue. The old consumer is no longer supported, please upgrade 
to the Java consumer whenever possible.

> In multi-consumer setup - explicit commit, commits on all partitions
> 
>
> Key: KAFKA-2025
> URL: https://issues.apache.org/jira/browse/KAFKA-2025
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.8.2.0
> Environment: 1. Tested in Windows
> 2. Not tested on Linux
>Reporter: Pradeep G
>Assignee: Neha Narkhede
>Priority: Critical
>
> In a setup where there are two consumers C1 & C2 belonging to consumer group 
> CG, two partitions P1 & P2; with auto-commit disabled.
> An explicit commit on ConsumerConnect commits on all the consumers i.e. a 
> commit called by C1 commits all messages being processed by other consumers 
> too here C2. 
> Ideally C1 should be able to commit only those messages it has consumed and 
> not what is being processed by C2. The effect of this behavior is that if C2 
> crashes while processing message M after C1 commits, the message M being 
> processed by C2 is not available on recovery and is lost forever; in Kafka, M 
> is marked as consumed.
> I read that this would be addressed in the rewrite - 
> https://cwiki.apache.org/confluence/display/KAFKA/Client+Rewrite#ClientRewrite-ConsumerAPI
> Any thoughts on which release this would be addressed ?.  A quick response 
> would be greatly appreciated.
> Thanks,
> Pradeep



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-1229) Reload broker config without a restart

2018-01-10 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1229.
--
Resolution: Duplicate

Resolving as duplicate of KIP-226/KAFKA-6240. Please reopen if there is any concern.

> Reload broker config without a restart
> --
>
> Key: KAFKA-1229
> URL: https://issues.apache.org/jira/browse/KAFKA-1229
> Project: Kafka
>  Issue Type: Wish
>  Components: config
>Affects Versions: 0.8.0
>Reporter: Carlo Cabanilla
>Priority: Minor
>
> In order to minimize client disruption, ideally you'd be able to reload 
> broker config without having to restart the server. On *nix system the 
> convention is to have the process reread its configuration if it receives a 
> SIGHUP signal.
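KIP-226/KAFKA-6240 ultimately delivered dynamic broker configuration through ZooKeeper-backed updates rather than signals, but the *nix convention the reporter describes looks like this in miniature (a Python sketch; the file path and config key are hypothetical, and real brokers read server.properties):

```python
import os
import signal
import tempfile

# Hypothetical config file for the demo.
CONFIG_PATH = os.path.join(tempfile.gettempdir(), "broker-demo.properties")
config = {}

def load_config():
    """Re-read key=value pairs from CONFIG_PATH into the in-memory config."""
    fresh = {}
    with open(CONFIG_PATH) as fh:
        for line in fh:
            line = line.strip()
            if line and not line.startswith("#"):
                key, _, value = line.partition("=")
                fresh[key.strip()] = value.strip()
    config.clear()
    config.update(fresh)

def on_sighup(signum, frame):
    load_config()  # reload in place: no restart, no client disruption

signal.signal(signal.SIGHUP, on_sighup)

with open(CONFIG_PATH, "w") as fh:
    fh.write("log.retention.hours=168\n")
load_config()

# Operator edits the file, then signals the running process.
with open(CONFIG_PATH, "w") as fh:
    fh.write("log.retention.hours=24\n")
os.kill(os.getpid(), signal.SIGHUP)

print(config["log.retention.hours"])  # 24
```

The handler re-reads the file in place, so connected clients never see the process go away.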



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5850) Py4JJavaError: An error occurred while calling o40.loadClass. : java.lang.ClassNotFoundException: org.apache.spark.streaming.kafka.KafkaUtilsPythonHelper

2018-01-09 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5850.
--
Resolution: Invalid

Please reopen if you think the issue is related to Kafka.


> Py4JJavaError: An error occurred while calling o40.loadClass. : 
> java.lang.ClassNotFoundException: 
> org.apache.spark.streaming.kafka.KafkaUtilsPythonHelper
> -
>
> Key: KAFKA-5850
> URL: https://issues.apache.org/jira/browse/KAFKA-5850
> Project: Kafka
>  Issue Type: Bug
>Reporter: Saurabh Bidwai
>
> ---
> Py4JJavaError Traceback (most recent call last)
>  in ()
> > 1 streamer(sc)
>  in streamer(sc)
>   5 pwords = 
> load_wordlist("/home/daasuser/twitter/kafkatweets/Twitter-Sentiment-Analysis-Using-Spark-Streaming-And-Kafka/Dataset/positive.txt")
>   6 nwords = 
> load_wordlist("/home/daasuser/twitter/kafkatweets/Twitter-Sentiment-Analysis-Using-Spark-Streaming-And-Kafka/Dataset/negative.txt")
> > 7 counts = stream(ssc, pwords, nwords, 600)
>   8 make_plot(counts)
>  in stream(ssc, pwords, nwords, duration)
>   1 def stream(ssc, pwords, nwords, duration):
> > 2 kstream = KafkaUtils.createDirectStream(ssc, topics = 
> ['twitterstream'], kafkaParams = {"metadata.broker.list": 
> ["dn1001:6667","dn2001:6667","dn3001:6667","dn4001:6667"]})
>   3 tweets = kstream.map(lambda x: x[1].encode("utf-8", "ignore"))
>   4 
>   5 # Each element of tweets will be the text of a tweet.
> /usr/hdp/current/spark-client/python/lib/pyspark.zip/pyspark/streaming/kafka.py
>  in createDirectStream(ssc, topics, kafkaParams, fromOffsets, keyDecoder, 
> valueDecoder, messageHandler)
> 150 if 'ClassNotFoundException' in str(e.java_exception):
> 151 KafkaUtils._printErrorMsg(ssc.sparkContext)
> --> 152 raise e
> 153 
> 154 stream = DStream(jstream, ssc, ser).map(func)
> Py4JJavaError: An error occurred while calling o40.loadClass.
> : java.lang.ClassNotFoundException: 
> org.apache.spark.streaming.kafka.KafkaUtilsPythonHelper
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
>   at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
>   at py4j.Gateway.invoke(Gateway.java:259)
>   at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
>   at py4j.commands.CallCommand.execute(CallCommand.java:79)
>   at py4j.GatewayConnection.run(GatewayConnection.java:209)
>   at java.lang.Thread.run(Thread.java:745)
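For readers hitting this error: the missing class lives in Spark's external Kafka connector, which is not on the driver classpath by default, so this is a Spark packaging issue rather than a Kafka one. A hedged example of supplying the connector at submit time; the artifact coordinates and versions are assumptions that must match your Spark/Scala build, and the script name is a placeholder:

```shell
# The spark-streaming-kafka helper classes ship in a separate artifact;
# pass it via --packages (or --jars with a local jar) so the Python
# KafkaUtils wrapper can find org.apache.spark.streaming.kafka classes.
# Adjust the Scala version (2.11) and Spark version (1.6.1) to your cluster.
spark-submit \
  --packages org.apache.spark:spark-streaming-kafka_2.11:1.6.1 \
  twitter_sentiment.py
```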



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-3893) Kafka Broker ID disappears from /brokers/ids

2018-01-09 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3893.
--
Resolution: Fixed

 Please reopen if the issue still exists. 


> Kafka Broker ID disappears from /brokers/ids
> 
>
> Key: KAFKA-3893
> URL: https://issues.apache.org/jira/browse/KAFKA-3893
> Project: Kafka
>  Issue Type: Bug
>Reporter: chaitra
>Priority: Critical
>
> Kafka version used : 0.8.2.1 
> Zookeeper version: 3.4.6
> We have a scenario where the Kafka broker's entry in the zookeeper path 
> /brokers/ids just disappears.
> We see the zookeeper connection active and no network issue.
> The zookeeper connection timeout is set to 6000ms in server.properties.
> Hence Kafka is not participating in the cluster.
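A broker's entry under /brokers/ids is an ephemeral znode tied to its ZooKeeper session, so it vanishes whenever that session expires (for example, after a pause longer than the 6000ms timeout), even while the broker process is still alive. A hedged way to inspect the registrations using the shell shipped with the Kafka distribution; host, port, and broker id are placeholders:

```shell
# List currently registered broker ids; a missing id means that broker's
# ZooKeeper session is gone, even if the process itself is still running.
bin/zookeeper-shell.sh localhost:2181 ls /brokers/ids

# Inspect one broker's registration payload (host, port, timestamp).
bin/zookeeper-shell.sh localhost:2181 get /brokers/ids/0
```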



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-6312) Add documentation about kafka-consumer-groups.sh's ability to set/change offsets

2018-01-09 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16318089#comment-16318089
 ] 

Manikumar commented on KAFKA-6312:
--

[~prasanna1433]  You can follow the links below:
http://kafka.apache.org/contributing
https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Website+Documentation+Changes

> Add documentation about kafka-consumer-groups.sh's ability to set/change 
> offsets
> 
>
> Key: KAFKA-6312
> URL: https://issues.apache.org/jira/browse/KAFKA-6312
> Project: Kafka
>  Issue Type: Improvement
>  Components: documentation
>Reporter: James Cheng
>  Labels: newbie
>
> KIP-122 added the ability for kafka-consumer-groups.sh to reset/change 
> consumer offsets, at a fine grained level.
> There is documentation on it in the kafka-consumer-groups.sh usage text. 
> There is no such documentation on the kafka.apache.org website. We should add 
> some documentation to the website, so that users can read about the 
> functionality without having the tools installed.
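Until the website documentation exists, a hedged sketch of the KIP-122 usage from the tool's own help text; the group and topic names are placeholders, and the group must have no active members when offsets are changed:

```shell
# Preview the change without applying it (without --execute, the tool
# runs in dry-run mode and only prints the resulting offsets).
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --group my-group --topic my-topic \
  --reset-offsets --to-earliest --dry-run

# Apply the reset. Other scenarios include --to-datetime, --to-offset,
# --shift-by, and --by-duration; --all-topics widens the scope.
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --group my-group --topic my-topic \
  --reset-offsets --to-earliest --execute
```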



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-6171) [1.0.0] Logging is broken with Windows and Java 9

2018-01-09 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-6171.
--
Resolution: Duplicate

> [1.0.0] Logging is broken with Windows and Java 9
> -
>
> Key: KAFKA-6171
> URL: https://issues.apache.org/jira/browse/KAFKA-6171
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 1.0.0
> Environment: Windows 10, Java 9.0.1 and 10 build 29
>Reporter: Juergen Zimmermann
>
> When I start a Kafka server on Windows using Java 9.0.1 (same for Java 10 
> build 29), I get the following error:
> {{[2017-11-04 10:05:11,457] WARN Error processing 
> kafka.log:type=LogManager,name=LogDirectoryOffline,logDirectory=C:\Zimmermann\kafka\logs
>  (com.yammer.metrics.reporting.JmxReporter)
> javax.management.MalformedObjectNameException: Invalid character ':' in value 
> part of property
> at 
> java.management/javax.management.ObjectName.construct(ObjectName.java:621)
> at 
> java.management/javax.management.ObjectName.<init>(ObjectName.java:1406)
> at 
> com.yammer.metrics.reporting.JmxReporter.onMetricAdded(JmxReporter.java:395)
> at 
> com.yammer.metrics.core.MetricsRegistry.notifyMetricAdded(MetricsRegistry.java:516)
> at 
> com.yammer.metrics.core.MetricsRegistry.getOrAdd(MetricsRegistry.java:491)
> at 
> com.yammer.metrics.core.MetricsRegistry.newGauge(MetricsRegistry.java:79)
> at 
> kafka.metrics.KafkaMetricsGroup$class.newGauge(KafkaMetricsGroup.scala:74)
> at kafka.log.LogManager.newGauge(LogManager.scala:50)
> at kafka.log.LogManager$$anonfun$6.apply(LogManager.scala:117)
> at kafka.log.LogManager$$anonfun$6.apply(LogManager.scala:116)
> at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
> at kafka.log.LogManager.<init>(LogManager.scala:116)
> at kafka.log.LogManager$.apply(LogManager.scala:799)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:222)
> at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
> at kafka.Kafka$.main(Kafka.scala:92)
> at kafka.Kafka.main(Kafka.scala)}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6452) Add documentation for delegation token authentication mechanism

2018-01-16 Thread Manikumar (JIRA)
Manikumar created KAFKA-6452:


 Summary: Add documentation for delegation token authentication 
mechanism
 Key: KAFKA-6452
 URL: https://issues.apache.org/jira/browse/KAFKA-6452
 Project: Kafka
  Issue Type: Sub-task
Reporter: Manikumar






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-4542) Add authentication based on delegation token.

2018-01-16 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-4542:
-
Fix Version/s: 1.1.0

> Add authentication based on delegation token.
> -
>
> Key: KAFKA-4542
> URL: https://issues.apache.org/jira/browse/KAFKA-4542
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ashish Singh
>Assignee: Manikumar
>Priority: Major
> Fix For: 1.1.0
>
>
> Add authentication based on delegation token.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-4542) Add authentication based on delegation token.

2018-01-16 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4542.
--
Resolution: Fixed

Fixed via [https://github.com/apache/kafka/pull/3616]

> Add authentication based on delegation token.
> -
>
> Key: KAFKA-4542
> URL: https://issues.apache.org/jira/browse/KAFKA-4542
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ashish Singh
>Assignee: Manikumar
>Priority: Major
>
> Add authentication based on delegation token.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6447) Add Delegation Token Operations to KafkaAdminClient

2018-01-15 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-6447:
-
Description: 
This JIRA is about adding delegation token operations to the new Admin Client 
API.

KIP: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-249%3A+Add+Delegation+Token+Operations+to+KafkaAdminClient

  was:This JIRA is about adding delegation token operations to the new Admin 
Client API.


> Add Delegation Token Operations to KafkaAdminClient
> ---
>
> Key: KAFKA-6447
> URL: https://issues.apache.org/jira/browse/KAFKA-6447
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Manikumar
>Assignee: Manikumar
>Priority: Major
>
> This JIRA is about adding delegation token operations to the new Admin Client 
> API.
> KIP: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-249%3A+Add+Delegation+Token+Operations+to+KafkaAdminClient



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6447) Add Delegation Token Operations to KafkaAdminClient

2018-01-15 Thread Manikumar (JIRA)
Manikumar created KAFKA-6447:


 Summary: Add Delegation Token Operations to KafkaAdminClient
 Key: KAFKA-6447
 URL: https://issues.apache.org/jira/browse/KAFKA-6447
 Project: Kafka
  Issue Type: Sub-task
Reporter: Manikumar
Assignee: Manikumar


This JIRA is about adding delegation token operations to the new Admin Client 
API.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

