[ https://issues.apache.org/jira/browse/KAFKA-9173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16988777#comment-16988777 ]
Oleg Muravskiy commented on KAFKA-9173:
---------------------------------------

I understand that this is how it is *implemented*, but it is not how it is *documented*, or at least I don't see it anywhere in the Streams documentation apart from the description of the (default and only) {{DefaultPartitionGrouper}}. Is there a reason for this behaviour? Could I simply implement an alternative {{PartitionGrouper}} that assigns each partition to its own task? (A rough sketch of what I have in mind is below, after the quoted issue.)

> StreamsPartitionAssignor assigns partitions to only one worker
> --------------------------------------------------------------
>
>                 Key: KAFKA-9173
>                 URL: https://issues.apache.org/jira/browse/KAFKA-9173
>             Project: Kafka
>          Issue Type: Bug
>          Components: streams
>    Affects Versions: 2.3.0, 2.2.1
>            Reporter: Oleg Muravskiy
>            Priority: Major
>              Labels: user-experience
>         Attachments: StreamsPartitionAssignor.log
>
>
> I'm running a distributed KafkaStreams application on 10 worker nodes, subscribed to 21 topics with 10 partitions each. I'm using only the Processor API, with a persistent state store.
> However, only one worker gets partitions assigned; all the other workers get nothing. Restarting the application or cleaning the local state stores does not help. The StreamsPartitionAssignor migrates to other nodes and eventually picks another node to assign partitions to, but it is still only one node.
> It's difficult to figure out where to look for signs of the problem, so I'm attaching the log messages from the StreamsPartitionAssignor. Let me know what else I can provide to help resolve this.
> [^StreamsPartitionAssignor.log]

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
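Sketch referenced in the comment above. Something along these lines is what I have in mind, assuming the pre-3.0 {{org.apache.kafka.streams.processor.PartitionGrouper}} interface and that using a running counter for the {{TaskId}} partition field is acceptable; the class name {{PerPartitionGrouper}} is just a placeholder, and I haven't checked how this interacts with changelog or repartition topic creation:

{code:java}
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.streams.processor.PartitionGrouper;
import org.apache.kafka.streams.processor.TaskId;

// Hypothetical grouper: one task per input partition, instead of one task
// per partition number shared across all co-partitioned topics of a group.
public class PerPartitionGrouper implements PartitionGrouper {

    @Override
    public Map<TaskId, Set<TopicPartition>> partitionGroups(final Map<Integer, Set<String>> topicGroups,
                                                            final Cluster metadata) {
        final Map<TaskId, Set<TopicPartition>> groups = new HashMap<>();
        for (final Map.Entry<Integer, Set<String>> entry : topicGroups.entrySet()) {
            final int topicGroupId = entry.getKey();
            int taskPartition = 0;
            for (final String topic : entry.getValue()) {
                final Integer partitionCount = metadata.partitionCountForTopic(topic);
                if (partitionCount == null) {
                    continue; // metadata for this topic not available yet
                }
                for (int partition = 0; partition < partitionCount; partition++) {
                    final Set<TopicPartition> group = new HashSet<>();
                    group.add(new TopicPartition(topic, partition));
                    // Each partition becomes its own task; the counter keeps the
                    // TaskId's partition field unique within the topic group.
                    groups.put(new TaskId(topicGroupId, taskPartition++), group);
                }
            }
        }
        return groups;
    }
}
{code}

It would then be plugged in via the {{partition.grouper}} config, if I read the docs correctly:

{code:java}
props.put(StreamsConfig.PARTITION_GROUPER_CLASS_CONFIG, PerPartitionGrouper.class);
{code}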