[jira] [Created] (KAFKA-7817) Multiple Consumer Group Management with Regex
Alex Dunayevsky created KAFKA-7817:
--------------------------------------

             Summary: Multiple Consumer Group Management with Regex
                 Key: KAFKA-7817
                 URL: https://issues.apache.org/jira/browse/KAFKA-7817
             Project: Kafka
          Issue Type: New Feature
          Components: tools
    Affects Versions: 2.1.0
            Reporter: Alex Dunayevsky

New feature: multiple consumer group management with regular expressions (kafka-consumer-groups.sh).

TODO: provide ConsumerGroupCommand with the ability to query/manage multiple consumer groups using a single regex pattern.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
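Until such a regex option exists, the selection can be approximated by filtering the output of `--list` through `grep -E`. A minimal sketch of that workaround; the broker address is a placeholder and the group names are made up to stand in for live `--list` output:

```shell
#!/bin/sh
BROKERS="localhost:9092"   # placeholder; point at your cluster
PATTERN='^payments-.*'     # the regex the requested feature would accept directly

# In a real run the list would come from the tool itself:
#   GROUP_LIST=$(bin/kafka-consumer-groups.sh --bootstrap-server "$BROKERS" --list)
# Sample data stands in for the live cluster here:
GROUP_LIST="payments-eu
payments-us
analytics-daily"

# Keep only the groups matching the pattern.
MATCHED=$(echo "$GROUP_LIST" | grep -E "$PATTERN")

# Then act on each selected group (the real --describe call is commented out).
echo "$MATCHED" | while read -r g; do
  echo "would run: kafka-consumer-groups.sh --bootstrap-server $BROKERS --describe --group $g"
  # bin/kafka-consumer-groups.sh --bootstrap-server "$BROKERS" --describe --group "$g"
done
```

The feature request amounts to folding the `grep -E` step into ConsumerGroupCommand itself, so one invocation covers every matching group.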
[jira] [Created] (KAFKA-7471) Multiple Consumer Group Management (Describe, Reset, Delete)
Alex Dunayevsky created KAFKA-7471:
--------------------------------------

             Summary: Multiple Consumer Group Management (Describe, Reset, Delete)
                 Key: KAFKA-7471
                 URL: https://issues.apache.org/jira/browse/KAFKA-7471
             Project: Kafka
          Issue Type: New Feature
          Components: tools
    Affects Versions: 2.0.0, 1.0.0
            Reporter: Alex Dunayevsky
            Assignee: Alex Dunayevsky
             Fix For: 2.0.1

Functionality needed:
* Describe/Delete/Reset offsets on multiple consumer groups at a time (including each group by repeating the `--group` parameter)
* Describe/Delete/Reset offsets on ALL consumer groups at a time (add a `--groups-all` key, similar to `--topics-all`)
* Generate CSV for multiple consumer groups

What are the benefits?
* No need to start a new JVM to perform each query on every single consumer group
* Ability to query groups by their status (for instance, `grep -v`-ing by `Stable` to spot problematic/dead/empty groups)
* Ability to export offsets to reset for multiple consumer groups to a CSV file (needs a rework of the CSV export/import format)
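The per-group JVM overhead mentioned above is easy to see in the current workaround: describing several groups means one `kafka-consumer-groups.sh` invocation, and thus one fresh JVM, per group. A minimal sketch; the broker address is a placeholder and the group names are made up to stand in for live `--list` output:

```shell
#!/bin/sh
BROKERS="localhost:9092"   # placeholder; point at your cluster

# In a live cluster the list would come from:
#   bin/kafka-consumer-groups.sh --bootstrap-server "$BROKERS" --list
# Sample output stands in for it here:
GROUP_LIST="orders
billing
audit"

# Each iteration below would spawn a fresh JVM -- exactly the overhead the
# issue asks to eliminate by accepting multiple groups (or --groups-all) at once.
for g in $GROUP_LIST; do
  echo "describe: $g"
  # bin/kafka-consumer-groups.sh --bootstrap-server "$BROKERS" --describe --group "$g"
done
```

With the requested feature, the whole loop collapses into a single invocation, which also makes status-based filtering (e.g. dropping `Stable` groups) a one-pass operation.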
[jira] [Created] (KAFKA-6743) ConsumerPerformance fails to consume all messages on topics with large number of partitions
Alex Dunayevsky created KAFKA-6743:
--------------------------------------

             Summary: ConsumerPerformance fails to consume all messages on topics with large number of partitions
                 Key: KAFKA-6743
                 URL: https://issues.apache.org/jira/browse/KAFKA-6743
             Project: Kafka
          Issue Type: Bug
          Components: core, tools
    Affects Versions: 0.11.0.2
            Reporter: Alex Dunayevsky

ConsumerPerformance fails to consume all messages on topics with a large number of partitions due to a relatively short default polling loop timeout (1000 ms) that the end user can neither reach nor modify.

Demo: create a topic with 10 000 partitions, send 50 000 000 records of 100 bytes each using kafka-producer-perf-test, and consume them using kafka-consumer-perf-test (ConsumerPerformance). You will likely notice that the number of records returned by kafka-consumer-perf-test is many times lower than the expected 50 000 000. This happens due to the specific ConsumerPerformance implementation: in some heavy cases it may take long enough to process/iterate through the records polled in batches that the time exceeds the default hardcoded polling loop timeout, and this is probably not what we want from this utility.

We have two options:
1) Increase the polling loop timeout in the ConsumerPerformance implementation. It defaults to 1000 ms and is hardcoded, so it cannot be changed, but we could expose it as an OPTIONAL kafka-consumer-perf-test parameter, making it configurable at the script level and available to the end user.
2) Decrease max.poll.records at the consumer config level. This is not a fine option, though, since we do not want to touch the default settings.
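The demo above can be sketched as a script. The broker/ZooKeeper addresses and topic name are placeholders, and the perf-test invocations are commented out since they require a running cluster (flags as in the 0.11.x tooling):

```shell
#!/bin/sh
BROKERS="localhost:9092"   # placeholder broker list
ZK="localhost:2181"        # placeholder ZooKeeper (0.11.x topic creation)
TOPIC="perf-10k-partitions"
NUM_RECORDS=50000000       # 50 000 000 records of 100 bytes each

# 1) Create a topic with 10 000 partitions:
# bin/kafka-topics.sh --zookeeper "$ZK" --create --topic "$TOPIC" \
#     --partitions 10000 --replication-factor 1

# 2) Produce the records:
# bin/kafka-producer-perf-test.sh --topic "$TOPIC" --num-records "$NUM_RECORDS" \
#     --record-size 100 --throughput -1 \
#     --producer-props bootstrap.servers="$BROKERS"

# 3) Consume and compare the reported record count against the expectation;
#    with the hardcoded 1000 ms poll timeout it typically falls short:
# bin/kafka-consumer-perf-test.sh --broker-list "$BROKERS" --topic "$TOPIC" \
#     --messages "$NUM_RECORDS"

echo "expected records: $NUM_RECORDS"
```

Option 1 from the issue would simply add an optional timeout flag to step 3, leaving the 1000 ms default intact for existing users.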
[jira] [Created] (KAFKA-6724) ConsumerPerformance resets offsets on every startup
Alex Dunayevsky created KAFKA-6724:
--------------------------------------

             Summary: ConsumerPerformance resets offsets on every startup
                 Key: KAFKA-6724
                 URL: https://issues.apache.org/jira/browse/KAFKA-6724
             Project: Kafka
          Issue Type: Bug
          Components: core, tools
    Affects Versions: 0.11.0.1
            Reporter: Alex Dunayevsky

ConsumerPerformance, used in kafka-consumer-perf-test.sh, resets offsets for its group on every startup.
[jira] [Created] (KAFKA-6597) Issues with Zookeeper and Kafka startup in Windows environment
Alex Dunayevsky created KAFKA-6597:
--------------------------------------

             Summary: Issues with Zookeeper and Kafka startup in Windows environment
                 Key: KAFKA-6597
                 URL: https://issues.apache.org/jira/browse/KAFKA-6597
             Project: Kafka
          Issue Type: Bug
    Affects Versions: 0.11.0.1, 0.10.0.1, 0.9.0.1
            Reporter: Alex Dunayevsky

Inability to start the Zookeeper and Kafka services using the standard Kafka .bat utilities in a Windows environment.

*Problem 1:* The CLASSPATH string is not formed correctly in bin\windows\kafka-run-class.bat:
{code}
bin\windows\zookeeper-server-start.bat config\zookeeper.properties
*** ... class not found ...
{code}
*Possible working solution:* assign CLASSPATH correctly in *bin\windows\kafka-run-class.bat*:
{code}
set CLASSPATH=%~dp0..\..\libs\*
{code}
*Problem 2:* In the Kafka distro, the *call :concat* may crash *bin\windows\kafka-run-class.bat*:
{code}
rem Classpath addition for release
call :concat %BASE_DIR%\libs\*
{code}
*Possible working solution:* comment out or delete those lines of code:
{code}
rem Classpath addition for release
rem call :concat %BASE_DIR%\libs\*
{code}
[jira] [Created] (KAFKA-6343) OOM as the result of creation of 5k topics
Alex Dunayevsky created KAFKA-6343:
--------------------------------------

             Summary: OOM as the result of creation of 5k topics
                 Key: KAFKA-6343
                 URL: https://issues.apache.org/jira/browse/KAFKA-6343
             Project: Kafka
          Issue Type: Bug
          Components: core
    Affects Versions: 0.10.1.1
         Environment: RHEL 7, RAM 755GB
            Reporter: Alex Dunayevsky

Create 5k topics *from the code*, quickly and without any delays, then wait for the brokers to finish loading them. This will actually never happen: all brokers will go down after approximately 10-15 minutes or more, depending on the hardware.

*Heap*: -Xmx/-Xms: 5G, 10G, 50G, 256G...
*Topology*: 3 brokers, 3 zk.

*Code for 5k topic creation:*
{code:java}
package kafka

import kafka.admin.AdminUtils
import kafka.utils.{Logging, ZkUtils}

object TestCreateTopics extends App with Logging {
  val zkConnect = "grid978:2185"
  val zkUtils   = ZkUtils(zkConnect, 6000, 6000, isZkSecurityEnabled = false)

  for (topic <- 1 to 5000) {
    AdminUtils.createTopic(
      topic             = topic.toString,
      partitions        = 10,
      replicationFactor = 2,
      zkUtils           = zkUtils
    )
    logger.info(s"Created topic $topic")
  }
}
{code}

*OOM:*
{code:java}
java.io.IOException: Map failed
	at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:920)
	at kafka.log.AbstractIndex.<init>(AbstractIndex.scala:61)
	at kafka.log.OffsetIndex.<init>(OffsetIndex.scala:52)
	at kafka.log.LogSegment.<init>(LogSegment.scala:67)
	at kafka.log.Log.loadSegments(Log.scala:255)
	at kafka.log.Log.<init>(Log.scala:108)
	at kafka.log.LogManager.createLog(LogManager.scala:362)
	at kafka.cluster.Partition.getOrCreateReplica(Partition.scala:94)
	at kafka.cluster.Partition$$anonfun$4$$anonfun$apply$2.apply(Partition.scala:174)
	at kafka.cluster.Partition$$anonfun$4$$anonfun$apply$2.apply(Partition.scala:174)
	at scala.collection.mutable.HashSet.foreach(HashSet.scala:78)
	at kafka.cluster.Partition$$anonfun$4.apply(Partition.scala:174)
	at kafka.cluster.Partition$$anonfun$4.apply(Partition.scala:168)
	at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:234)
	at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:242)
	at kafka.cluster.Partition.makeLeader(Partition.scala:168)
	at kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:758)
	at kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:757)
	at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
	at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
	at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
	at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
	at scala.collection.mutable.HashMap.foreach(HashMap.scala:99)
	at kafka.server.ReplicaManager.makeLeaders(ReplicaManager.scala:757)
	at kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:703)
	at kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:148)
	at kafka.server.KafkaApis.handle(KafkaApis.scala:82)
	at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: Map failed
	at sun.nio.ch.FileChannelImpl.map0(Native Method)
	at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:917)
	... 28 more
{code}