[ https://issues.apache.org/jira/browse/KAFKA-7563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16819661#comment-16819661 ]

LiWei Wang commented on KAFKA-7563:
-----------------------------------

We have met the same issue.

*Log information:*
1. kstream log:
    [2019-04-02 09:12:41,404]-[WARN]-[org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor.assign(StreamsPartitionAssignor.java:578)]-stream-thread [stream-localhost-StreamThread-1-consumer] Partition sensor_input-1 is not assigned to any tasks: \{0_0=[sensor_input-0]} Possible causes of a partition not getting assigned is that another topic defined in the topology has not been created when starting your streams application, resulting in no tasks created for this topology at all.
 
2. kafka log at the same time:
    [2019-04-02 09:12:41,649] INFO Topic creation Map(sensor-stream-kvDay1-changelog-0 -> ArrayBuffer(100)) (kafka.zk.AdminZkClient)
 
3. kstream error log:
    [2019-04-02 09:12:43,399]-[ERROR]-[org.apache.kafka.streams.processor.internals.InternalTopicManager.validateTopicPartitions(InternalTopicManager.java:235)]-stream-thread [stream] Existing internal topic sensor-stream-kvDay1-changelog has invalid partitions: expected: 24; actual: 1. Use 'kafka.tools.StreamsResetter' tool to clean up invalid topics before processing.
 
Is there a good solution at the moment?
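
For reference, the error log above already points at the StreamsResetter tool for cleaning up invalid internal topics. The topic-deletion part of that cleanup can also be sketched with the AdminClient. This is only a sketch, assuming a single test broker at localhost:9092 (placeholder address); it does not replace the full reset tool, which also resets input-topic offsets and handles the application's other internal topics:

{code:java}
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class DeleteInvalidChangelogTopic {

    public static void main(final String[] args) throws Exception {
        final Properties props = new Properties();
        // Placeholder bootstrap address for the single test broker.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        // Changelog topic named in the error log above; it was created with
        // 1 partition instead of the expected 24.
        final String invalidChangelog = "sensor-stream-kvDay1-changelog";

        try (AdminClient admin = AdminClient.create(props)) {
            // Delete the mis-created changelog topic so that Streams can
            // re-create it with the expected partition count on the next start.
            admin.deleteTopics(Collections.singleton(invalidChangelog)).all().get();
        }
    }
}
{code}

As with the reset tool, the Streams application should be stopped before its internal topics are deleted.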

> Single broker sends incorrect metadata for topic partitions
> -----------------------------------------------------------
>
>                 Key: KAFKA-7563
>                 URL: https://issues.apache.org/jira/browse/KAFKA-7563
>             Project: Kafka
>          Issue Type: Bug
>    Affects Versions: 2.0.0
>            Reporter: Martin Kamp Jensen
>            Priority: Major
>         Attachments: kafka.log, zookeeper.log
>
>
> When starting our Kafka Streams application in a test setup with just one
> Kafka broker, we are seeing the following error in roughly 1 out of 15 runs:
> {{StreamsException: Existing internal topic 
> alarm-message-streams-alarm-from-unknown-asset-changelog has invalid 
> partitions: expected: 32; actual: 25. Use 'kafka.tools.StreamsResetter' tool 
> to clean up invalid topics before processing.}}
> (Note: It is not always the same topic that causes the error.)
> When we see the error above, the actual number of partitions varies (expected
> is 32, actual is above 0 and below 32).
> Before each test run the Kafka broker is started without data (using
> [https://hub.docker.com/r/wurstmeister/kafka/]).
> We have never seen this happen in our non-test environments, where we run 6
> Kafka brokers. However, we run a significantly higher number of test runs than
> deployments to non-test.
> After some investigation (including using AdminClient to describe the topics 
> when the Kafka Streams application got the StreamsException and confirming 
> that AdminClient also reports that a topic has the wrong number of 
> partitions!) we implemented the following workaround: When the Kafka Streams 
> application fails with the exception, we stop the application, stop the Kafka 
> broker, start the Kafka broker, and finally start the application. Then the 
> exception is not thrown. Of course this does not explain or fix the real 
> issue at hand but it is still important because we all hate flaky tests.
> Kafka and ZooKeeper log files are attached from a run where the exception
> above occurred and where applying the workaround described above enabled us to
> continue without the exception.
> This issue was created by request of Matthias J. Sax at 
> https://stackoverflow.com/questions/52943653/existing-internal-topic-has-invalid-partitions.
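
The AdminClient check described in the report (confirming that the broker really returns the wrong partition count for the changelog topic) can be reproduced in a few lines. A minimal sketch, assuming a placeholder bootstrap address and using the topic name and expected partition count from the original report:

{code:java}
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

public class ChangelogPartitionCheck {

    public static void main(final String[] args) throws Exception {
        final Properties props = new Properties();
        // Placeholder bootstrap address; point this at the test broker.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        // Topic name and expected partition count taken from the report above.
        final String changelog = "alarm-message-streams-alarm-from-unknown-asset-changelog";
        final int expectedPartitions = 32;

        try (AdminClient admin = AdminClient.create(props)) {
            // Ask the broker for the topic metadata and count its partitions.
            final TopicDescription description = admin
                    .describeTopics(Collections.singleton(changelog))
                    .all()
                    .get()
                    .get(changelog);
            final int actualPartitions = description.partitions().size();
            System.out.printf("%s: expected %d partitions, broker reports %d%n",
                    changelog, expectedPartitions, actualPartitions);
        }
    }
}
{code}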


