[jira] [Comment Edited] (KAFKA-15648) QuorumControllerTest#testBootstrapZkMigrationRecord is flaky

2024-05-11 Thread sanghyeok An (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17845658#comment-17845658
 ] 

sanghyeok An edited comment on KAFKA-15648 at 5/12/24 2:00 AM:
---

Hey, [~davidarthur]!

How did you test it locally?

In my case, I ran the test repeatedly, but the error you hit locally never occurred.

These are the commands I ran:
 # gradle --parallel test
 # gradle --parallel metadata:test
 # gradle metadata:test
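For what it's worth, running the whole module once per invocation makes reproduction unlikely; a flaky test usually needs many repeated runs of just that one test. A rough sketch of such a loop (the helper name and the Gradle invocation are my own assumptions; adjust module and test names to your checkout):

```shell
# run_until_fail: rerun a command up to N times, stopping at the first failure.
run_until_fail() {
  local max="$1"; shift
  local i
  for i in $(seq 1 "$max"); do
    if ! "$@" > /dev/null 2>&1; then
      echo "failed on attempt $i"
      return 1
    fi
  done
  echo "passed all $max attempts"
}

# Hypothetical usage (adjust module and test names to your checkout):
# run_until_fail 50 ./gradlew :metadata:test \
#   --tests org.apache.kafka.controller.QuorumControllerTest.testBootstrapZkMigrationRecord
```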

 

 


was (Author: JIRAUSER303328):
Hey, [~davidarthur]!

How did you test it locally? I ran the test repeatedly, but that error did not occur.

These are the commands I ran:
 # gradle --parallel test
 # gradle --parallel metadata:test
 # gradle metadata:test

 

 

> QuorumControllerTest#testBootstrapZkMigrationRecord is flaky
> 
>
> Key: KAFKA-15648
> URL: https://issues.apache.org/jira/browse/KAFKA-15648
> Project: Kafka
>  Issue Type: Bug
>  Components: controller, unit tests
>Reporter: David Arthur
>Priority: Minor
>  Labels: flaky-test, good-first-issue
>
> Noticed that this test failed on Jenkins with 
> {code}
> org.apache.kafka.server.fault.FaultHandlerException: fatalFaultHandler: 
> exception while completing controller activation: Should not have ZK 
> migrations enabled on a cluster running metadata.version 3.0-IV1
>   at 
> app//org.apache.kafka.controller.ActivationRecordsGenerator.recordsForNonEmptyLog(ActivationRecordsGenerator.java:154)
>   at 
> app//org.apache.kafka.controller.ActivationRecordsGenerator.generate(ActivationRecordsGenerator.java:229)
>   at 
> app//org.apache.kafka.controller.QuorumController$CompleteActivationEvent.generateRecordsAndResult(QuorumController.java:1237)
>   at 
> app//org.apache.kafka.controller.QuorumController$ControllerWriteEvent.run(QuorumController.java:784)
>   at 
> app//org.apache.kafka.queue.KafkaEventQueue$EventContext.run(KafkaEventQueue.java:127)
>   at 
> app//org.apache.kafka.queue.KafkaEventQueue$EventHandler.handleEvents(KafkaEventQueue.java:210)
>   at 
> app//org.apache.kafka.queue.KafkaEventQueue$EventHandler.run(KafkaEventQueue.java:181)
>   at java.base@11.0.16.1/java.lang.Thread.run(Thread.java:829)
> Caused by: java.lang.RuntimeException: Should not have ZK migrations enabled 
> on a cluster running metadata.version 3.0-IV1
>   ... 8 more
> {code}
> I think this exception is a red herring (it's expected from one of the 
> negative test cases). When trying to reproduce this issue locally, I do 
> occasionally see the following type of failure
> {code}
> [2023-10-19 13:42:10,091] INFO Elected new leader: 
> LeaderAndEpoch(leaderId=OptionalInt[0], epoch=1). 
> (org.apache.kafka.metalog.LocalLogManager$SharedLogData:300)
> [2023-10-19 13:42:10,091] DEBUG 
> append(batch=LeaderChangeBatch(newLeader=LeaderAndEpoch(leaderId=OptionalInt[0],
>  epoch=1)), nextEndOffset=0) 
> (org.apache.kafka.metalog.LocalLogManager$SharedLogData:276)
> [2023-10-19 13:42:10,091] DEBUG [LocalLogManager 0] Node 0: running log 
> check. (org.apache.kafka.metalog.LocalLogManager:536)
> [2023-10-19 13:42:10,091] DEBUG [LocalLogManager 0] initialized local log 
> manager for node 0 (org.apache.kafka.metalog.LocalLogManager:685)
> [2023-10-19 13:42:10,091] DEBUG [QuorumController id=0] Creating in-memory 
> snapshot -1 (org.apache.kafka.timeline.SnapshotRegistry:203)
> [2023-10-19 13:42:10,091] INFO [QuorumController id=0] Creating new 
> QuorumController with clusterId 6xRUXZ_kQ1GfuaHK42AS9Q. ZK migration mode is 
> enabled. (org.apache.kafka.controller.QuorumController:1912)
> [2023-10-19 13:42:10,092] INFO [LocalLogManager 0] Node 0: registered 
> MetaLogListener 1082153924 (org.apache.kafka.metalog.LocalLogManager:703)
> [2023-10-19 13:42:10,092] DEBUG [LocalLogManager 0] Node 0: running log 
> check. (org.apache.kafka.metalog.LocalLogManager:536)
> [2023-10-19 13:42:10,092] DEBUG [LocalLogManager 0] Node 0: Executing 
> handleLeaderChange LeaderAndEpoch(leaderId=OptionalInt[0], epoch=1) 
> (org.apache.kafka.metalog.LocalLogManager:578)
> [2023-10-19 13:42:10,092] DEBUG [QuorumController id=0] Executing 
> handleLeaderChange[1]. (org.apache.kafka.controller.QuorumController:577)
> [2023-10-19 13:42:10,092] INFO [QuorumController id=0] In the new epoch 1, 
> the leader is (none). (org.apache.kafka.controller.QuorumController:1179)
> [2023-10-19 13:42:10,092] DEBUG [QuorumController id=0] Processed 
> handleLeaderChange[1] in 50 us 
> (org.apache.kafka.controller.QuorumController:510)
> [2023-10-19 13:42:10,092] DEBUG [QuorumController id=0] Executing 
> handleLeaderChange[1]. (org.apache.kafka.controller.QuorumController:577)
> [2023-10-19 13:42:10,092] INFO [QuorumController id=0] Becoming the active 
> controller at epoch 1, next write offset 1. 
> (org.apache.kafka.controller.QuorumController:1175)
> [2023-10-19 13:42:10,092] DEBUG [QuorumController id=0] Processed 
> handleLeaderChange[1] in 77 us 
> (org.apache.kafka.controller.QuorumController:510)
> [2023-10-19 13:42:10,092] WARN [QuorumController id=0] Performing controller 
> activation. The metadata 

[jira] [Commented] (KAFKA-15648) QuorumControllerTest#testBootstrapZkMigrationRecord is flaky

2024-05-11 Thread sanghyeok An (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17845658#comment-17845658
 ] 

sanghyeok An commented on KAFKA-15648:
--

Hey, [~davidarthur] ! 

How did you test it locally? I ran the test repeatedly, but that error did not occur.

These are the commands I ran:
 # gradle --parallel test
 # gradle --parallel metadata:test
 # gradle metadata:test

 

 


[jira] [Commented] (KAFKA-16619) Unnecessary controller warning : "Loaded ZK migration state of NONE"

2024-05-11 Thread sanghyeok An (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17845652#comment-17845652
 ] 

sanghyeok An commented on KAFKA-16619:
--

Aye! Thanks a lot!

> Unnecessary controller warning : "Loaded ZK migration state of NONE"
> 
>
> Key: KAFKA-16619
> URL: https://issues.apache.org/jira/browse/KAFKA-16619
> Project: Kafka
>  Issue Type: Improvement
>  Components: controller
>Affects Versions: 3.6.2
>Reporter: F Méthot
>Priority: Trivial
>  Labels: good-first-issue
>
> When we launch a fresh Kafka cluster with KRaft controllers, with no 
> ZooKeeper involved, we get this warning in the controller log:
> [2024-04-15 03:44:33,881] WARN [QuorumController id=3] Performing controller 
> activation. Loaded ZK migration state of NONE. 
> (org.apache.kafka.controller.QuorumController)
>  
> Our project has no business with ZooKeeper; seeing this message prompted us 
> to investigate and spend time looking up this warning to find an explanation.
> We have the setting
> {_}zookeeper.metadata.migration.enable{_}=false
> and we still get the warning.
> In a future version, to avoid further confusion, this message should not be 
> shown when ZooKeeper is not involved at all.
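If it helps, the fix being asked for is essentially to choose the log level based on the migration flag. A minimal sketch of that idea (class and method names are hypothetical; this is not the actual Kafka patch):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class MigrationStateLogging {
    // Pick the level for the activation message: WARN only when ZK migration
    // is actually enabled, otherwise a debug-level that default configs hide.
    static Level levelFor(boolean zkMigrationEnabled) {
        return zkMigrationEnabled ? Level.WARNING : Level.FINE;
    }

    public static void main(String[] args) {
        Logger log = Logger.getLogger("QuorumController");
        boolean zkMigrationEnabled = false; // zookeeper.metadata.migration.enable=false
        // At the default INFO threshold this FINE message is simply not shown.
        log.log(levelFor(zkMigrationEnabled),
                "Performing controller activation. Loaded ZK migration state of NONE.");
    }
}
```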



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16619) Unnecessary controller warning : "Loaded ZK migration state of NONE"

2024-05-11 Thread sanghyeok An (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17845618#comment-17845618
 ] 

sanghyeok An commented on KAFKA-16619:
--

Hey, [~davidarthur]!

May I take this issue?






[jira] [Commented] (KAFKA-16670) KIP-848 : Consumer will not receive assignment forever because of concurrent issue.

2024-05-11 Thread sanghyeok An (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17845614#comment-17845614
 ] 

sanghyeok An commented on KAFKA-16670:
--

Hey, [~lianetm],

I found the root cause, but you already fixed it with this PR 
([https://github.com/apache/kafka/pull/15698]), right?

That PR is not included in apache/kafka:3.7.0.

When will developers be able to use it?

> KIP-848 : Consumer will not receive assignment forever because of concurrent 
> issue.
> ---
>
> Key: KAFKA-16670
> URL: https://issues.apache.org/jira/browse/KAFKA-16670
> Project: Kafka
>  Issue Type: Bug
>Reporter: sanghyeok An
>Priority: Major
> Attachments: image-2024-05-07-08-34-06-855.png, 
> image-2024-05-07-08-36-22-983.png, image-2024-05-07-08-36-40-656.png, 
> image-2024-05-07-08-38-27-753.png
>
>
> *Related Code*
>  * Consumer gets the assignment successfully:
>  ** 
> [https://github.com/chickenchickenlove/new-consumer-error/blob/8c1d74db1ec60350c28f5ed25f595559180dc603/src/test/java/com/example/MyTest.java#L35-L57]
>  * Consumer gets stuck forever because of the concurrency issue:
>  ** 
> [https://github.com/chickenchickenlove/new-consumer-error/blob/8c1d74db1ec60350c28f5ed25f595559180dc603/src/test/java/com/example/MyTest.java#L61-L79]
>  
> *Unexpected behaviour*
>  * The broker is sufficiently slow.
>  * A KafkaConsumer is created and immediately subscribes to a topic.
> If both conditions are met, the {{Consumer}} can potentially never receive 
> {{TopicPartition}} assignments and may become stuck indefinitely. In the case 
> of a new {{broker}} and new {{consumer}}, when the consumer is created, the 
> {{consumer background thread}} starts to send a request to the broker (I 
> believe this is a {{GroupCoordinator Heartbeat request}}). During this time, 
> if the {{broker}} has not yet loaded metadata from {{__consumer_offsets}}, it 
> will begin to schedule metadata loading. Once the broker has completely 
> loaded the metadata, the {{consumer background thread}} recognizes this 
> broker as a valid group coordinator. However, there is a possibility that the 
> {{consumer}} sends a {{subscribe request}} to the {{broker}} before the 
> {{broker}} has replied to the {{GroupCoordinator Heartbeat Request}}. In such 
> a scenario, the {{consumer}} appears to be stuck.
>  
> You can check this scenario in 
> {{src/test/java/com/example/MyTest#should_fail_because_consumer_try_to_poll_before_background_thread_get_valid_coordinator}}.
> If there is no sleep time to wait for the {{GroupCoordinator Heartbeat 
> Request}}, the {{consumer}} will always be stuck. If there is a little sleep 
> time, the {{consumer}} will always receive the assignment.
>  
> README : 
> [https://github.com/chickenchickenlove/new-consumer-error/blob/main/README.md]
>  
> In my case, the consumer gets its assignment under `docker-compose`: that 
> environment is not slow enough. However, the consumer cannot get its 
> assignment under `testcontainers` without a little waiting time: that 
> environment is slow enough to trigger the concurrency issue. `testcontainers` 
> runs Docker in Docker, so it is slower than `docker-compose`.





[jira] [Commented] (KAFKA-16670) KIP-848 : Consumer will not receive assignment forever because of concurrent issue.

2024-05-06 Thread sanghyeok An (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17844063#comment-17844063
 ] 

sanghyeok An commented on KAFKA-16670:
--

Hi, [~lianetm]! Thanks for your comments. Your description is clear, and I've 
understood it!

 

So based on those expectations and back to your example, we don't need to wait 
before calling subscribe (that's handled internally by the 
HeartbeatRequestManager as described above). I wonder if it's the fact that in 
the failed case you're polling 10 times only (vs. 100 times in the successful 
case)?? In order to receive records, we do need to make sure that we are 
calling poll after the assignment has been received (so the consumer issues a 
fetch request for the partitions assigned). Note that even when you poll for 1s 
in your test, a poll that happens before the assignment has been received, will 
block for 1s but it's doomed to return empty, because it is not waiting for 
records from the topics you're interested in (no partitions assigned yet). 
Could you make sure that the test is calling poll after the assignment has been 
received? (I would suggest just polling while true for a certain amount of 
time, no sleeping after the subscribe needed).
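The "keep polling until the assignment has been received" loop suggested above could look roughly like this. It is a self-contained sketch with a simulated consumer; with a real {{KafkaConsumer}} you would call {{poll(Duration)}} and check {{assignment()}} instead of the hypothetical supplier and runnable used here:

```java
import java.time.Duration;
import java.util.List;
import java.util.function.Supplier;

public class PollUntilAssigned {
    // Keep polling until partitions have been assigned, instead of sleeping a
    // fixed time after subscribe(). Each poll drives the consumer's network
    // activity; before assignment, polls are doomed to return empty.
    static boolean awaitAssignment(Supplier<List<String>> assignment,
                                   Runnable poll,
                                   Duration timeout) {
        long deadline = System.nanoTime() + timeout.toNanos();
        while (System.nanoTime() < deadline) {
            poll.run();
            if (!assignment.get().isEmpty()) {
                return true;  // partitions assigned; fetches can now succeed
            }
        }
        return false;  // timed out without ever receiving an assignment
    }

    public static void main(String[] args) {
        // Simulated consumer whose assignment arrives on the 3rd poll.
        int[] polls = {0};
        Supplier<List<String>> assignment =
                () -> polls[0] >= 3 ? List.of("my-topic-0") : List.of();
        System.out.println(awaitAssignment(assignment, () -> polls[0]++,
                Duration.ofSeconds(5)));  // prints true
    }
}
```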

 

Sorry for the confusion.

I intended to try it 100 times for both the failure and success cases, but the 
code was set to attempt only 10 times in the failure case. Anyway, as you 
suggested, I proceeded by logging the {{poll()}} attempts.

!image-2024-05-07-08-34-06-855.png|width=721,height=269!
 # The consumer calls {{poll()}} up to 1000 times.
 # The consumer logs ("i : " + i) on each attempt.
 # If the consumer succeeds in polling a non-empty record, it counts down the 
countDownLatch. We can then check whether the countDownLatch has reached 0.

!image-2024-05-07-08-36-40-656.png|width=848,height=315!

I waited until poll had been called 430 times, which means the consumer waited 
for its assignment for about 430 seconds.

However, the consumer still had not received its assignment.

!image-2024-05-07-08-38-27-753.png|width=1654,height=289!

However, after receiving the initial FindCoordinator request, the broker did 
not perform any further action. Please see the log above.

The broker has no log entries after 2024-05-06 23:29:27, but by the time of 
the 430th attempt it was already 2024-05-06 23:39:00.

 

Anyway, it seems that the consumer, the broker, or both have a potential 
issue.

What do you think?
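For reference, the latch-based check described in the steps above can be sketched like this (simulated consumer, hypothetical names; a real test would poll a {{KafkaConsumer}} and count records):

```java
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.function.IntFunction;

public class PollWithLatch {
    // Poll up to maxAttempts, logging each attempt; count the latch down once
    // a non-empty batch arrives, then report whether it ever reached zero.
    static boolean pollUntilRecords(IntFunction<List<String>> poll, int maxAttempts) {
        CountDownLatch latch = new CountDownLatch(1);
        for (int i = 0; i < maxAttempts && latch.getCount() > 0; i++) {
            System.out.println("i : " + i);      // per-attempt log, as in the test
            if (!poll.apply(i).isEmpty()) {
                latch.countDown();               // records finally received
            }
        }
        return latch.getCount() == 0;
    }

    public static void main(String[] args) {
        // Simulated consumer that starts returning records on the 5th attempt.
        System.out.println(pollUntilRecords(
                i -> i >= 4 ? List.of("record") : List.of(), 1000)); // prints true
    }
}
```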


[jira] [Updated] (KAFKA-16670) KIP-848 : Consumer will not receive assignment forever because of concurrent issue.

2024-05-06 Thread sanghyeok An (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sanghyeok An updated KAFKA-16670:
-
Attachment: image-2024-05-07-08-38-27-753.png






[jira] [Updated] (KAFKA-16670) KIP-848 : Consumer will not receive assignment forever because of concurrent issue.

2024-05-06 Thread sanghyeok An (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sanghyeok An updated KAFKA-16670:
-
Attachment: image-2024-05-07-08-36-22-983.png






[jira] [Updated] (KAFKA-16670) KIP-848 : Consumer will not receive assignment forever because of concurrent issue.

2024-05-06 Thread sanghyeok An (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sanghyeok An updated KAFKA-16670:
-
Attachment: image-2024-05-07-08-36-40-656.png

> KIP-848 : Consumer will not receive assignment forever because of concurrent 
> issue.
> ---
>
> Key: KAFKA-16670
> URL: https://issues.apache.org/jira/browse/KAFKA-16670
> Project: Kafka
>  Issue Type: Bug
>Reporter: sanghyeok An
>Priority: Major
> Attachments: image-2024-05-07-08-34-06-855.png, 
> image-2024-05-07-08-36-22-983.png, image-2024-05-07-08-36-40-656.png
>
>
> *Related Code*
>  * Consumer get assignment Successfully :
>  ** 
> [https://github.com/chickenchickenlove/new-consumer-error/blob/8c1d74db1ec60350c28f5ed25f595559180dc603/src/test/java/com/example/MyTest.java#L35-L57]
>  * Consumer get be stuck Forever because of concurrent issue:
>  ** 
> [https://github.com/chickenchickenlove/new-consumer-error/blob/8c1d74db1ec60350c28f5ed25f595559180dc603/src/test/java/com/example/MyTest.java#L61-L79]
>  
> *Unexpected behaviour*
>  * The broker is sufficiently slow.
>  * A KafkaConsumer is created and immediately subscribes to a topic.
> If both conditions are met, the {{Consumer}} can potentially never receive 
> {{TopicPartition}} assignments and may become stuck indefinitely. In the case 
> of a new {{broker}} and a new {{{}consumer{}}}, when the consumer is created, 
> the {{consumer background thread}} starts to send a request to the broker (I 
> believe this is a {{{}GroupCoordinator Heartbeat request{}}}). During this 
> time, if the {{broker}} has not yet loaded metadata from 
> {{{}__consumer_offsets{}}}, it will schedule the metadata loading. Once 
> the broker has completely loaded the metadata, the {{consumer background 
> thread}} recognizes this broker as a valid group coordinator. However, the 
> {{consumer}} can send a {{subscribe request}} to the {{broker}} before the 
> {{broker}} has replied to the {{{}GroupCoordinator Heartbeat Request{}}}. In 
> such a scenario, the {{consumer}} appears to be stuck.
>  
> You can check this scenario in 
> {{{}src/test/java/com/example/MyTest#should_fail_because_consumer_try_to_poll_before_background_thread_get_valid_coordinator{}}}.
>  Without a sleep to wait for the {{{}GroupCoordinator Heartbeat 
> Request{}}}, the {{consumer}} is always stuck; with a short sleep, the 
> {{consumer}} always receives its assignment.
>  
> README : 
> [https://github.com/chickenchickenlove/new-consumer-error/blob/main/README.md]
>  
> In my case, the consumer gets its assignment under `docker-compose`: that 
> environment is not slow enough to trigger the race. 
> However, the consumer cannot get an assignment under `testcontainers` without 
> a short waiting time: that environment is slow enough to cause the 
> concurrency issue. `testcontainers` runs Docker in Docker, so it is slower 
> than `docker-compose`. 
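The race described above (the foreground thread subscribing and polling before the background network thread has confirmed a valid group coordinator) can be sketched with plain JDK concurrency primitives. This is a hypothetical, stdlib-only simulation of the timing hazard, not the actual Kafka client internals: the latch stands in for coordinator discovery and the sleep for the slow broker.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class CoordinatorRaceSketch {
    public static void main(String[] args) throws Exception {
        CountDownLatch coordinatorReady = new CountDownLatch(1);

        // Background thread: stands in for the consumer's network thread,
        // which treats the coordinator as valid only after the broker has
        // loaded __consumer_offsets metadata (a slow broker = a long delay).
        ExecutorService background = Executors.newSingleThreadExecutor();
        background.submit(() -> {
            try {
                Thread.sleep(200); // simulated slow broker
                coordinatorReady.countDown();
            } catch (InterruptedException ignored) { }
        });

        // Case 1: subscribe+poll immediately -- coordinator discovery has
        // not finished, so nothing arrives within the short poll window.
        boolean assignedImmediately =
                coordinatorReady.await(10, TimeUnit.MILLISECONDS);
        System.out.println("immediate poll assigned: " + assignedImmediately);

        // Case 2: a short wait before polling lets the background thread
        // finish coordinator discovery first.
        boolean assignedAfterWait =
                coordinatorReady.await(2, TimeUnit.SECONDS);
        System.out.println("poll after wait assigned: " + assignedAfterWait);

        background.shutdown();
    }
}
```

Here the 200 ms sleep plays the role of the testcontainers slowness; shrinking it below the first wait corresponds to the docker-compose case where the race never fires.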



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16670) KIP-848 : Consumer will not receive assignment forever because of concurrent issue.

2024-05-06 Thread sanghyeok An (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sanghyeok An updated KAFKA-16670:
-
Attachment: image-2024-05-07-08-34-06-855.png

> KIP-848 : Consumer will not receive assignment forever because of concurrent 
> issue.
> ---
>
> Key: KAFKA-16670
> URL: https://issues.apache.org/jira/browse/KAFKA-16670
> Project: Kafka
>  Issue Type: Bug
>Reporter: sanghyeok An
>Priority: Major
> Attachments: image-2024-05-07-08-34-06-855.png
>
>
> *Related Code*
>  * Consumer gets its assignment successfully:
>  ** 
> [https://github.com/chickenchickenlove/new-consumer-error/blob/8c1d74db1ec60350c28f5ed25f595559180dc603/src/test/java/com/example/MyTest.java#L35-L57]
>  * Consumer gets stuck forever because of a concurrency issue:
>  ** 
> [https://github.com/chickenchickenlove/new-consumer-error/blob/8c1d74db1ec60350c28f5ed25f595559180dc603/src/test/java/com/example/MyTest.java#L61-L79]
>  
> *Unexpected behaviour*
>  * The broker is sufficiently slow.
>  * A KafkaConsumer is created and immediately subscribes to a topic.
> If both conditions are met, the {{Consumer}} can potentially never receive 
> {{TopicPartition}} assignments and may become stuck indefinitely. In the case 
> of a new {{broker}} and a new {{{}consumer{}}}, when the consumer is created, 
> the {{consumer background thread}} starts to send a request to the broker (I 
> believe this is a {{{}GroupCoordinator Heartbeat request{}}}). During this 
> time, if the {{broker}} has not yet loaded metadata from 
> {{{}__consumer_offsets{}}}, it will schedule the metadata loading. Once 
> the broker has completely loaded the metadata, the {{consumer background 
> thread}} recognizes this broker as a valid group coordinator. However, the 
> {{consumer}} can send a {{subscribe request}} to the {{broker}} before the 
> {{broker}} has replied to the {{{}GroupCoordinator Heartbeat Request{}}}. In 
> such a scenario, the {{consumer}} appears to be stuck.
>  
> You can check this scenario in 
> {{{}src/test/java/com/example/MyTest#should_fail_because_consumer_try_to_poll_before_background_thread_get_valid_coordinator{}}}.
>  Without a sleep to wait for the {{{}GroupCoordinator Heartbeat 
> Request{}}}, the {{consumer}} is always stuck; with a short sleep, the 
> {{consumer}} always receives its assignment.
>  
> README : 
> [https://github.com/chickenchickenlove/new-consumer-error/blob/main/README.md]
>  
> In my case, the consumer gets its assignment under `docker-compose`: that 
> environment is not slow enough to trigger the race. 
> However, the consumer cannot get an assignment under `testcontainers` without 
> a short waiting time: that environment is slow enough to cause the 
> concurrency issue. `testcontainers` runs Docker in Docker, so it is slower 
> than `docker-compose`. 





[jira] [Updated] (KAFKA-16670) KIP-848 : Consumer will not receive assignment forever because of concurrent issue.

2024-05-05 Thread sanghyeok An (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sanghyeok An updated KAFKA-16670:
-
Description: 
*Related Code*
 * Consumer gets its assignment successfully:
 ** 
[https://github.com/chickenchickenlove/new-consumer-error/blob/8c1d74db1ec60350c28f5ed25f595559180dc603/src/test/java/com/example/MyTest.java#L35-L57]
 * Consumer gets stuck forever because of a concurrency issue:
 ** 
[https://github.com/chickenchickenlove/new-consumer-error/blob/8c1d74db1ec60350c28f5ed25f595559180dc603/src/test/java/com/example/MyTest.java#L61-L79]

 

*Unexpected behaviour*
 * The broker is sufficiently slow.
 * A KafkaConsumer is created and immediately subscribes to a topic.

If both conditions are met, the {{Consumer}} can potentially never receive 
{{TopicPartition}} assignments and may become stuck indefinitely. In the case 
of a new {{broker}} and a new {{{}consumer{}}}, when the consumer is created, 
the {{consumer background thread}} starts to send a request to the broker (I 
believe this is a {{{}GroupCoordinator Heartbeat request{}}}). During this time, 
if the {{broker}} has not yet loaded metadata from {{{}__consumer_offsets{}}}, 
it will schedule the metadata loading. Once the broker has completely 
loaded the metadata, the {{consumer background thread}} recognizes this broker 
as a valid group coordinator. However, the {{consumer}} can send a 
{{subscribe request}} to the {{broker}} before the {{broker}} has replied to 
the {{{}GroupCoordinator Heartbeat Request{}}}. In such a scenario, the 
{{consumer}} appears to be stuck.

 

You can check this scenario in 
{{{}src/test/java/com/example/MyTest#should_fail_because_consumer_try_to_poll_before_background_thread_get_valid_coordinator{}}}.
 Without a sleep to wait for the {{{}GroupCoordinator Heartbeat Request{}}}, 
the {{consumer}} is always stuck; with a short sleep, the {{consumer}} always 
receives its assignment.

 

README : 
[https://github.com/chickenchickenlove/new-consumer-error/blob/main/README.md]

 

In my case, the consumer gets its assignment under `docker-compose`: that 
environment is not slow enough to trigger the race. 

However, the consumer cannot get an assignment under `testcontainers` without a 
short waiting time: that environment is slow enough to cause the concurrency 
issue. 

`testcontainers` runs Docker in Docker, so it is slower than 
`docker-compose`. 

  was:
*Related Code*
 * Consumer get assignment Successfully :
 ** 
[https://github.com/chickenchickenlove/new-consumer-error/blob/8c1d74db1ec60350c28f5ed25f595559180dc603/src/test/java/com/example/MyTest.java#L35-L57]
 * Consumer get be stuck Forever because of concurrent issue:
 ** 
https://github.com/chickenchickenlove/new-consumer-error/blob/8c1d74db1ec60350c28f5ed25f595559180dc603/src/test/java/com/example/MyTest.java#L61-L79

 

*Unexpected behaviour*
[|https://github.com/chickenchickenlove/new-consumer-error#unexpected-behaviour]
 * Broker is sufficiently slow.
 * When a KafkaConsumer is created and immediately subscribes to a topic

If both conditions are met, {{Consumer}} can potentially never receive 
{{TopicPartition}} assignments and become stuck indefinitely.

In case of new broker and new consumer, when consumer are created, consumer 
background thread send a request to broker. (I guess groupCoordinator Heartbeat 
request). In that time, if broker does not load metadata from 
{{{}__consumer_offset{}}}, broker will start to schedule load metadata. After 
broker load metadata completely, consumer background thread think 'this broker 
is valid group coordinator'.

However, consumer can send {{subscribe}} request to broker before {{broker}} 
reply about {{{}groupCoordinator HeartBeat Request{}}}. In that case, consumer 
seems to be stuck.

If both conditions are met, the {{Consumer}} can potentially never receive 
{{TopicPartition}} assignments and may become indefinitely stuck. In the case 
of a new {{broker}} and new {{{}consumer{}}}, when the consumer is created, 
{{consumer background thread}} start to send a request to the broker. (I 
believe this is a {{{}GroupCoordinator Heartbeat request{}}}) During this time, 
if the {{broker}} has not yet loaded 

[jira] [Created] (KAFKA-16670) KIP-848 : Consumer will not receive assignment forever because of concurrent issue.

2024-05-05 Thread sanghyeok An (Jira)
sanghyeok An created KAFKA-16670:


 Summary: KIP-848 : Consumer will not receive assignment forever 
because of concurrent issue.
 Key: KAFKA-16670
 URL: https://issues.apache.org/jira/browse/KAFKA-16670
 Project: Kafka
  Issue Type: Bug
Reporter: sanghyeok An


*Related Code*
 * Consumer get assignment Successfully :
 ** 
[https://github.com/chickenchickenlove/new-consumer-error/blob/8c1d74db1ec60350c28f5ed25f595559180dc603/src/test/java/com/example/MyTest.java#L35-L57]
 * Consumer get be stuck Forever because of concurrent issue:
 ** 
https://github.com/chickenchickenlove/new-consumer-error/blob/8c1d74db1ec60350c28f5ed25f595559180dc603/src/test/java/com/example/MyTest.java#L61-L79

 

*Unexpected behaviour*
[|https://github.com/chickenchickenlove/new-consumer-error#unexpected-behaviour]
 * The broker is sufficiently slow.
 * A KafkaConsumer is created and immediately subscribes to a topic.

If both conditions are met, the {{Consumer}} can potentially never receive 
{{TopicPartition}} assignments and may become stuck indefinitely. In the case 
of a new {{broker}} and a new {{{}consumer{}}}, when the consumer is created, 
the {{consumer background thread}} starts to send a request to the broker (I 
believe this is a {{{}GroupCoordinator Heartbeat request{}}}). During this time, 
if the {{broker}} has not yet loaded metadata from {{{}__consumer_offsets{}}}, 
it will schedule the metadata loading. Once the broker has completely 
loaded the metadata, the {{consumer background thread}} recognizes this broker 
as a valid group coordinator. However, the {{consumer}} can send a 
{{subscribe request}} to the {{broker}} before the {{broker}} has replied to 
the {{{}GroupCoordinator Heartbeat Request{}}}. In such a scenario, the 
{{consumer}} appears to be stuck.

You can check this scenario in 
{{{}src/test/java/com/example/MyTest#should_fail_because_consumer_try_to_poll_before_background_thread_get_valid_coordinator{}}}.
 Without a sleep to wait for the {{{}GroupCoordinator Heartbeat Request{}}}, 
the {{consumer}} is always stuck; with a short sleep, the {{consumer}} always 
receives its assignment.

 

README : 
https://github.com/chickenchickenlove/new-consumer-error/blob/main/README.md





[jira] [Commented] (KAFKA-16657) KIP-848 does not work well on Zookeeper Mode

2024-05-03 Thread sanghyeok An (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843244#comment-17843244
 ] 

sanghyeok An commented on KAFKA-16657:
--

Thanks for your comments. 

I totally understood (y)

> KIP-848 does not work well on Zookeeper Mode
> 
>
> Key: KAFKA-16657
> URL: https://issues.apache.org/jira/browse/KAFKA-16657
> Project: Kafka
>  Issue Type: Bug
>Reporter: sanghyeok An
>Priority: Major
>
> Hi, Kafka Team.
> I am testing the new rebalance protocol of KIP-848. It seems that the KIP-848 
> protocol works well in KRaft mode; however, it does not work well in 
> `Zookeeper` mode. 
>  
> I have created two versions of docker-compose files for Zookeeper Mode and 
> KRaft Mode. And I tested KIP-848 using the same consumer code and settings.
>  
> In KRaft Mode, the consumer received the assignment correctly. However, an 
> error occurred in Zookeeper Mode.
>  
> *Is KIP-848 supported in Zookeeper mode? or only KRaft is supported?* 
>  
> FYI, This is the code I used.
>  * ZK docker-compose: 
> https://github.com/chickenchickenlove/kraft-test/blob/main/docker-compose2/docker-compose.yaml
>  * ZK Result: 
> https://github.com/chickenchickenlove/kraft-test/blob/main/docker-compose2/README.md
>  * KRaft docker-compose:  
> https://github.com/chickenchickenlove/kraft-test/blob/main/docker-compose3/docker-compose.yaml
>  * KRaft Result: 
> https://github.com/chickenchickenlove/kraft-test/blob/main/docker-compose3/README.md
>  * Consumer code: 
> https://github.com/chickenchickenlove/kraft-test/blob/main/src/main/java/org/example/Main.java
>  
>  
>  





[jira] [Commented] (KAFKA-16657) KIP-848 does not work well on Zookeeper Mode

2024-05-03 Thread sanghyeok An (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843231#comment-17843231
 ] 

sanghyeok An commented on KAFKA-16657:
--

Thanks for confirming it!! 

Sorry to say, I'm not familiar with AK 4.0.

Could you give more context about AK? (Apache Kafka?) 

> KIP-848 does not work well on Zookeeper Mode
> 
>
> Key: KAFKA-16657
> URL: https://issues.apache.org/jira/browse/KAFKA-16657
> Project: Kafka
>  Issue Type: Bug
>Reporter: sanghyeok An
>Priority: Major
>
> Hi, Kafka Team.
> I am testing the new rebalance protocol of KIP-848. It seems that the KIP-848 
> protocol works well in KRaft mode; however, it does not work well in 
> `Zookeeper` mode. 
>  
> I have created two versions of docker-compose files for Zookeeper Mode and 
> KRaft Mode. And I tested KIP-848 using the same consumer code and settings.
>  
> In KRaft Mode, the consumer received the assignment correctly. However, an 
> error occurred in Zookeeper Mode.
>  
> *Is KIP-848 supported in Zookeeper mode? or only KRaft is supported?* 
>  
> FYI, This is the code I used.
>  * ZK docker-compose: 
> https://github.com/chickenchickenlove/kraft-test/blob/main/docker-compose2/docker-compose.yaml
>  * ZK Result: 
> https://github.com/chickenchickenlove/kraft-test/blob/main/docker-compose2/README.md
>  * KRaft docker-compose:  
> https://github.com/chickenchickenlove/kraft-test/blob/main/docker-compose3/docker-compose.yaml
>  * KRaft Result: 
> https://github.com/chickenchickenlove/kraft-test/blob/main/docker-compose3/README.md
>  * Consumer code: 
> https://github.com/chickenchickenlove/kraft-test/blob/main/src/main/java/org/example/Main.java
>  
>  
>  





[jira] [Commented] (KAFKA-16657) KIP-848 does not work well on Zookeeper Mode

2024-05-03 Thread sanghyeok An (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17843212#comment-17843212
 ] 

sanghyeok An commented on KAFKA-16657:
--

[~schofielaj] Thanks for your comments. 

Is there a document where that is specified? I tried looking for it in this 
document, but it wasn't there:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-848%3A+The+Next+Generation+of+the+Consumer+Rebalance+Protocol

> KIP-848 does not work well on Zookeeper Mode
> 
>
> Key: KAFKA-16657
> URL: https://issues.apache.org/jira/browse/KAFKA-16657
> Project: Kafka
>  Issue Type: Bug
>Reporter: sanghyeok An
>Priority: Major
>
> Hi, Kafka Team.
> I am testing the new rebalance protocol of KIP-848. It seems that the KIP-848 
> protocol works well in KRaft mode; however, it does not work well in 
> `Zookeeper` mode. 
>  
> I have created two versions of docker-compose files for Zookeeper Mode and 
> KRaft Mode. And I tested KIP-848 using the same consumer code and settings.
>  
> In KRaft Mode, the consumer received the assignment correctly. However, an 
> error occurred in Zookeeper Mode.
>  
> *Is KIP-848 supported in Zookeeper mode? or only KRaft is supported?* 
>  
> FYI, This is the code I used.
>  * ZK docker-compose: 
> https://github.com/chickenchickenlove/kraft-test/blob/main/docker-compose2/docker-compose.yaml
>  * ZK Result: 
> https://github.com/chickenchickenlove/kraft-test/blob/main/docker-compose2/README.md
>  * KRaft docker-compose:  
> https://github.com/chickenchickenlove/kraft-test/blob/main/docker-compose3/docker-compose.yaml
>  * KRaft Result: 
> https://github.com/chickenchickenlove/kraft-test/blob/main/docker-compose3/README.md
>  * Consumer code: 
> https://github.com/chickenchickenlove/kraft-test/blob/main/src/main/java/org/example/Main.java
>  
>  
>  





[jira] [Created] (KAFKA-16657) KIP-848 does not work well on Zookeeper Mode

2024-05-02 Thread sanghyeok An (Jira)
sanghyeok An created KAFKA-16657:


 Summary: KIP-848 does not work well on Zookeeper Mode
 Key: KAFKA-16657
 URL: https://issues.apache.org/jira/browse/KAFKA-16657
 Project: Kafka
  Issue Type: Bug
Reporter: sanghyeok An


Hi, Kafka Team.

I am testing the new rebalance protocol of KIP-848. It seems that the KIP-848 
protocol works well in KRaft mode; however, it does not work well in 
`Zookeeper` mode. 

 

I have created two versions of docker-compose files, one for Zookeeper mode 
and one for KRaft mode, and tested KIP-848 using the same consumer code and 
settings.

 

In KRaft Mode, the consumer received the assignment correctly. However, an 
error occurred in Zookeeper Mode.

 

*Is KIP-848 supported in Zookeeper mode? or only KRaft is supported?* 

 

FYI, This is the code I used.
 * ZK docker-compose: 
https://github.com/chickenchickenlove/kraft-test/blob/main/docker-compose2/docker-compose.yaml
 * ZK Result: 
https://github.com/chickenchickenlove/kraft-test/blob/main/docker-compose2/README.md
 * KRaft docker-compose:  
https://github.com/chickenchickenlove/kraft-test/blob/main/docker-compose3/docker-compose.yaml
 * KRaft Result: 
https://github.com/chickenchickenlove/kraft-test/blob/main/docker-compose3/README.md
 * Consumer code: 
https://github.com/chickenchickenlove/kraft-test/blob/main/src/main/java/org/example/Main.java

 

 

 





[jira] [Commented] (KAFKA-16637) KIP-848 does not work well

2024-05-02 Thread sanghyeok An (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17842935#comment-17842935
 ] 

sanghyeok An commented on KAFKA-16637:
--

[~kirktrue] Thanks for your comments. 

I wasn't aware of what you were talking about. Thanks for the detailed 
explanation. 

I'm happy to bring the potential issue to your attention.

> KIP-848 does not work well
> --
>
> Key: KAFKA-16637
> URL: https://issues.apache.org/jira/browse/KAFKA-16637
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer
>Reporter: sanghyeok An
>Assignee: Kirk True
>Priority: Blocker
>  Labels: kip-848-client-support
> Fix For: 3.8.0
>
> Attachments: image-2024-04-30-08-33-06-367.png, 
> image-2024-04-30-08-33-50-435.png
>
>
> I want to test the next generation of the consumer rebalance protocol 
> ([https://cwiki.apache.org/confluence/display/KAFKA/The+Next+Generation+of+the+Consumer+Rebalance+Protocol+%28KIP-848%29+-+Early+Access+Release+Notes)]
>  
> However, it does not work well. 
> You can check my setup below.
>  
> *Docker-compose.yaml*
> [https://github.com/chickenchickenlove/kraft-test/blob/main/docker-compose/docker-compose.yaml]
>  
> *Consumer Code*
> [https://github.com/chickenchickenlove/kraft-test/blob/main/src/main/java/org/example/Main.java]
>  
> *Consumer logs*
> [main] INFO org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector 
> - initializing Kafka metrics collector
> [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.7.0
> [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 
> 2ae524ed625438c5
> [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 
> 1714309299215
> [main] INFO org.apache.kafka.clients.consumer.internals.AsyncKafkaConsumer - 
> [Consumer clientId=1-1, groupId=1] Subscribed to topic(s): test-topic1
> [consumer_background_thread] INFO org.apache.kafka.clients.Metadata - 
> [Consumer clientId=1-1, groupId=1] Cluster ID: Some(MkU3OEVBNTcwNTJENDM2Qk)
> Stuck In here...
>  
> *Broker logs* 
> broker    | [2024-04-28 12:42:27,751] INFO Sent auto-creation request for 
> Set(__consumer_offsets) to the active controller. 
> (kafka.server.DefaultAutoTopicCreationManager)
> broker    | [2024-04-28 12:42:27,801] INFO Sent auto-creation request for 
> Set(__consumer_offsets) to the active controller. 
> (kafka.server.DefaultAutoTopicCreationManager)
> broker    | [2024-04-28 12:42:28,211] INFO Sent auto-creation request for 
> Set(__consumer_offsets) to the active controller. 
> (kafka.server.DefaultAutoTopicCreationManager)
> broker    | [2024-04-28 12:42:28,259] INFO Sent auto-creation request for 
> Set(__consumer_offsets) to the active controller. 
> (kafka.server.DefaultAutoTopicCreationManager)
> broker    | [2024-04-28 12:42:28,727] INFO Sent auto-creation request for 
> Set(__consumer_offsets) to the active controller. 
> (kafka.server.DefaultAutoTopicCreationManager)
> stuck in here





[jira] [Commented] (KAFKA-16637) KIP-848 does not work well

2024-05-02 Thread sanghyeok An (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17842934#comment-17842934
 ] 

sanghyeok An commented on KAFKA-16637:
--

Hi [~lianetm]!

I guess the problem was caused by a misconfiguration of 
`controller.quorum.voters`. 

After fixing it, the consumer received its assignment in the response to the 
`heartbeat request`. 

 

Here's another question. Is KIP-848 only available in KRaft Mode? Can it also 
be used in Zookeeper Mode? When using KIP-848 in Zookeeper Mode, it throws the 
following error: "Exception in thread "main" 
org.apache.kafka.common.errors.UnsupportedVersionException: The version of API 
is not supported." The consumer throws this error.

 

Here, you can view the docker-compose files I configured for Zookeeper and 
KRaft Mode, and check the results. As you can see from the results, KRaft Mode 
was ok, but Zookeeper was not okay.

 
 * zookeeper mode with docker-compose : 
https://github.com/chickenchickenlove/kraft-test/tree/main/docker-compose2
 * Kraft mode with docker-compose : 
https://github.com/chickenchickenlove/kraft-test/tree/main/docker-compose3
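Assuming the answer above (KIP-848 is KRaft-only in its early-access form), the opt-in configuration can be sketched as below. The property names follow the KIP-848 early-access release notes ({{group.protocol}} on the consumer; {{group.coordinator.rebalance.protocols}} and {{unstable.api.versions.enable}} on the broker); verify them against the Kafka version in use, since early-access settings may change.

```java
import java.util.Properties;

public class Kip848ConfigSketch {
    public static void main(String[] args) {
        // Broker side (KRaft only): enable the new group coordinator
        // protocol. These names come from the KIP-848 early-access
        // release notes; treat them as assumptions to verify.
        Properties broker = new Properties();
        broker.setProperty("group.coordinator.rebalance.protocols",
                "classic,consumer");
        broker.setProperty("unstable.api.versions.enable", "true");

        // Consumer side: opt in to the new rebalance protocol.
        Properties consumer = new Properties();
        consumer.setProperty("group.protocol", "consumer");

        System.out.println("broker protocols: "
                + broker.getProperty("group.coordinator.rebalance.protocols"));
        System.out.println("consumer protocol: "
                + consumer.getProperty("group.protocol"));
    }
}
```

On a Zookeeper-mode broker the ConsumerGroupHeartbeat API simply does not exist, which is consistent with the UnsupportedVersionException above.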

 

> KIP-848 does not work well
> --
>
> Key: KAFKA-16637
> URL: https://issues.apache.org/jira/browse/KAFKA-16637
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer
>Reporter: sanghyeok An
>Assignee: Kirk True
>Priority: Blocker
>  Labels: kip-848-client-support
> Fix For: 3.8.0
>
> Attachments: image-2024-04-30-08-33-06-367.png, 
> image-2024-04-30-08-33-50-435.png
>
>
> I want to test the next generation of the consumer rebalance protocol 
> ([https://cwiki.apache.org/confluence/display/KAFKA/The+Next+Generation+of+the+Consumer+Rebalance+Protocol+%28KIP-848%29+-+Early+Access+Release+Notes)]
>  
> However, it does not work well. 
> You can check my setup below.
>  
> *Docker-compose.yaml*
> [https://github.com/chickenchickenlove/kraft-test/blob/main/docker-compose/docker-compose.yaml]
>  
> *Consumer Code*
> [https://github.com/chickenchickenlove/kraft-test/blob/main/src/main/java/org/example/Main.java]
>  
> *Consumer logs*
> [main] INFO org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector 
> - initializing Kafka metrics collector
> [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.7.0
> [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 
> 2ae524ed625438c5
> [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 
> 1714309299215
> [main] INFO org.apache.kafka.clients.consumer.internals.AsyncKafkaConsumer - 
> [Consumer clientId=1-1, groupId=1] Subscribed to topic(s): test-topic1
> [consumer_background_thread] INFO org.apache.kafka.clients.Metadata - 
> [Consumer clientId=1-1, groupId=1] Cluster ID: Some(MkU3OEVBNTcwNTJENDM2Qk)
> Stuck In here...
>  
> *Broker logs* 
> broker    | [2024-04-28 12:42:27,751] INFO Sent auto-creation request for 
> Set(__consumer_offsets) to the active controller. 
> (kafka.server.DefaultAutoTopicCreationManager)
> broker    | [2024-04-28 12:42:27,801] INFO Sent auto-creation request for 
> Set(__consumer_offsets) to the active controller. 
> (kafka.server.DefaultAutoTopicCreationManager)
> broker    | [2024-04-28 12:42:28,211] INFO Sent auto-creation request for 
> Set(__consumer_offsets) to the active controller. 
> (kafka.server.DefaultAutoTopicCreationManager)
> broker    | [2024-04-28 12:42:28,259] INFO Sent auto-creation request for 
> Set(__consumer_offsets) to the active controller. 
> (kafka.server.DefaultAutoTopicCreationManager)
> broker    | [2024-04-28 12:42:28,727] INFO Sent auto-creation request for 
> Set(__consumer_offsets) to the active controller. 
> (kafka.server.DefaultAutoTopicCreationManager)
> stuck in here





[jira] [Updated] (KAFKA-16648) Question: KIP-848 and KafkaTestKit.java

2024-04-30 Thread sanghyeok An (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sanghyeok An updated KAFKA-16648:
-
Description: 
Hi, Kafka Team.
I am writing test code for the new rebalancing protocol proposed in KIP-848.

It works well in ordinary code; however, it does not work properly when 
creating an EmbeddedBroker using KafkaTestKit.java.

 
 

*Phenomena*
 # Create a CombineBroker that acts as both controller and broker using 
KafkaTestKit.
 # The consumer does subscribe() and poll() against the created broker. 
 

At this time, the Consumer sends a HeartBeat Signal to the Broker successfully. 
However, it never receives a Partition Assigned response from the Broker.
 

*What is my broker configs?* 
!image-2024-04-30-19-19-12-316.png|width=530,height=228!
 

*Actual Broker Config.*
!image-2024-04-30-19-20-14-427.png|width=465,height=151!
I set controller.quorum.voters = 0@localhost:9093, but 0@0.0.0.0:0 is set 
instead, because of this code 
([https://github.com/apache/kafka/blob/7c0a302c4da9d53a8fddc504a9fac8d8afecbec8/core/src/test/java/kafka/testkit/KafkaClusterTestKit.java#L305-L307)]
 
 
 

*My opinion.*
I am not familiar with the broker's quorum, but it seems to be the problem.
 
I expect that when the Consumer sends a poll request to the broker, the group 
coordinator broker assigns the topic/partitions and then performs a quorum 
vote for each epoch number.
 
However, this seems not to work because the controller to vote for is 
represented as 0.0.0.0:0.
 
This setting also does not work well when applied to containers in 
docker-compose. Could this be the cause of the problem?
 
 

*Question*
If {{controller.quorum.voters}} is set to {{0.0.0.0:0}} and I want to use 
consumer group rebalancing through KIP-848, what settings should be applied to 
the brokers and consumers?
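For reference, {{controller.quorum.voters}} entries are expected in the form {{id@host:port}}. A minimal stdlib-only sketch of parsing one entry, which also shows why a value like {{0@0.0.0.0:0}} is unusable (the host and port point nowhere, so voters cannot reach the controller):

```java
public class QuorumVotersSketch {
    public static void main(String[] args) {
        // Expected format: comma-separated "{nodeId}@{host}:{port}" entries.
        String voters = "0@localhost:9093";
        String[] parts = voters.split("@");
        int nodeId = Integer.parseInt(parts[0]);
        String endpoint = parts[1];
        System.out.println("nodeId=" + nodeId + " endpoint=" + endpoint);

        // A rewritten value of "0@0.0.0.0:0" keeps the node id but loses
        // the reachable endpoint -- consistent with the symptom above,
        // where the quorum never forms and assignments never arrive.
    }
}
```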

  was:
Hi, Kafka Team.
I am writing test code for the new rebalancing protocol proposed in KIP-848.

It works well in general code. However, it does not work properly when creating 
an EmbeddedBroker using KafkaTestKit.java.

 
 
 
### Phenomena
 # Create a CombineBroker that acts as both controller and broker using 
KafkaTestKit.
 # Consumer do subscribe() and poll() to created Broker. 
 

At this time, the Consumer sends a HeartBeat Signal to the Broker successfully. 
However, it never receives a Partition Assigned response from the Broker.
 
### What is my broker configs? 
!image-2024-04-30-19-19-12-316.png|width=530,height=228!
 
### Actual Broker Config.
!image-2024-04-30-19-20-14-427.png|width=465,height=151!
I set controller.quorum.voters = 0@localhost:9093, but 0@0.0.0.0.0:0 is setted. 
Because of this codes 
([https://github.com/apache/kafka/blob/7c0a302c4da9d53a8fddc504a9fac8d8afecbec8/core/src/test/java/kafka/testkit/KafkaClusterTestKit.java#L305-L307)]
 
 
 
### My opinion.
I am not familiar with the broker's quorum, but it seems to be the problem.
 
I expect that when the Consumer sends a poll request to the broker, the group 
coordinator broker assigns the topic/partition and then performs quorum for 
each epoch number.
 
However, it seems to not work because the controller to vote is represented as 
0.0.0.0:0.
 
This setting does not work well when applied to containers in docker-compose.
Could this be the cause of the problem?
 
 
### Question
If {{controller.quorum.voters}} is set to {{0.0.0.0:0}} and i want to use 
consumer group rebalancing through KIP-848, what settings should be applied to 
the brokers and consumers?


> Question: KIP-848 and KafkaTestKit.java
> ---
>
> Key: KAFKA-16648
> URL: https://issues.apache.org/jira/browse/KAFKA-16648
> Project: Kafka
>  Issue Type: Bug
>Reporter: sanghyeok An
>Priority: Minor
> Attachments: image-2024-04-30-19-19-12-316.png, 
> image-2024-04-30-19-20-14-427.png
>
>
> Hi, Kafka Team.
> I am writing test code for the new rebalancing protocol proposed in KIP-848.
> It works well in general code. However, it does not work properly when 
> creating an EmbeddedBroker using KafkaTestKit.java.
>  
>  
> *Phenomena*
>  # Create a CombineBroker that acts as both controller and broker using 
> KafkaTestKit.
>  # Consumer do subscribe() and poll() to created Broker. 
>  
> At this time, the Consumer sends a HeartBeat Signal to the Broker 
> successfully. However, it never receives a Partition Assigned response from 
> the Broker.
>  
> *What is my broker configs?* 
> !image-2024-04-30-19-19-12-316.png|width=530,height=228!
>  
> *Actual Broker Config.*
> !image-2024-04-30-19-20-14-427.png|width=465,height=151!
> I set controller.quorum.voters = 0@localhost:9093, but 0@0.0.0.0.0:0 is 
> setted. 
> Because of this codes 
> ([https://github.com/apache/kafka/blob/7c0a302c4da9d53a8fddc504a9fac8d8afecbec8/core/src/test/java/kafka/testkit/KafkaClusterTestKit.java#L305-L307)]
>  
>  
>  
> *My opinion.*
> I am 

[jira] [Created] (KAFKA-16648) Question: KIP-848 and KafkaTestKit.java

2024-04-30 Thread sanghyeok An (Jira)
sanghyeok An created KAFKA-16648:


 Summary: Question: KIP-848 and KafkaTestKit.java
 Key: KAFKA-16648
 URL: https://issues.apache.org/jira/browse/KAFKA-16648
 Project: Kafka
  Issue Type: Bug
Reporter: sanghyeok An
 Attachments: image-2024-04-30-19-19-12-316.png, 
image-2024-04-30-19-20-14-427.png

Hi, Kafka Team.
I am writing test code for the new rebalancing protocol proposed in KIP-848.

It works well in ordinary code. However, it does not work properly when creating 
an EmbeddedBroker using KafkaTestKit.java.

 
 
 
### Phenomena
 # Create a CombineBroker that acts as both controller and broker using 
KafkaTestKit.
 # The consumer calls subscribe() and poll() against the created broker.
 

At this point, the consumer successfully sends heartbeat signals to the broker. 
However, it never receives a partition-assignment response from the broker.
 
### What are my broker configs? 
!image-2024-04-30-19-19-12-316.png|width=530,height=228!
 
### Actual Broker Config.
!image-2024-04-30-19-20-14-427.png|width=465,height=151!
I set controller.quorum.voters = 0@localhost:9093, but 0@0.0.0.0:0 is set 
instead, because of this code: 
[https://github.com/apache/kafka/blob/7c0a302c4da9d53a8fddc504a9fac8d8afecbec8/core/src/test/java/kafka/testkit/KafkaClusterTestKit.java#L305-L307]
 
 
 
### My opinion
I am not familiar with the broker quorum, but it seems to be the source of the problem.
 
I expect that when the consumer sends a poll request to the broker, the group 
coordinator broker assigns the topic partitions and then performs a quorum vote 
for each epoch number.
 
However, this seems not to work because the controller to vote for is 
represented as 0.0.0.0:0.
 
This setting does not work well when applied to containers in docker-compose.
Could this be the cause of the problem?
 
 
### Question
If {{controller.quorum.voters}} is set to {{0.0.0.0:0}} and I want to use 
consumer group rebalancing through KIP-848, what settings should be applied to 
the brokers and consumers?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-16637) KIP-848 does not work well

2024-04-29 Thread sanghyeok An (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17842184#comment-17842184
 ] 

sanghyeok An commented on KAFKA-16637:
--

[~kirktrue] thanks for your comments. 

I didn't catch the issue you mentioned. I will check it out! 

However, providing a non-zero duration still does not work well:

!image-2024-04-30-08-33-06-367.png|width=839,height=289!

I changed my code (Duration.ofSeconds(1)). 

However, the same logs are printed. 

!image-2024-04-30-08-33-50-435.png|width=1116,height=139!

 

Is there any workaround? 
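As a small aside, the change described above boils down to never calling poll() with a zero timeout. A tiny standalone sketch of such a guard; the method name and the one-second fallback are illustrative choices, not Kafka API:

```java
import java.time.Duration;

public class PollTimeout {
    // Returns a safe poll timeout: with a zero (or invalid) timeout the
    // consumer may make no visible progress, so fall back to a default.
    static Duration safePollTimeout(Duration requested) {
        Duration fallback = Duration.ofSeconds(1); // illustrative default
        return (requested == null || requested.isZero() || requested.isNegative())
                ? fallback
                : requested;
    }

    public static void main(String[] args) {
        System.out.println(safePollTimeout(Duration.ZERO));          // PT1S
        System.out.println(safePollTimeout(Duration.ofMillis(250))); // PT0.25S
    }
}
```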

> KIP-848 does not work well
> --
>
> Key: KAFKA-16637
> URL: https://issues.apache.org/jira/browse/KAFKA-16637
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer
>Reporter: sanghyeok An
>Assignee: Kirk True
>Priority: Minor
>  Labels: kip-848-client-support
> Fix For: 3.8.0
>
> Attachments: image-2024-04-30-08-33-06-367.png, 
> image-2024-04-30-08-33-50-435.png
>
>
> I want to test next generation of the consumer rebalance protocol  
> ([https://cwiki.apache.org/confluence/display/KAFKA/The+Next+Generation+of+the+Consumer+Rebalance+Protocol+%28KIP-848%29+-+Early+Access+Release+Notes)]
>  
> However, it does not works well. 
> You can check my condition.
>  
> *Docker-compose.yaml*
> [https://github.com/chickenchickenlove/kraft-test/blob/main/docker-compose/docker-compose.yaml]
>  
> *Consumer Code*
> [https://github.com/chickenchickenlove/kraft-test/blob/main/src/main/java/org/example/Main.java]
>  
> *Consumer logs*
> [main] INFO org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector 
> - initializing Kafka metrics collector
> [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.7.0
> [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 
> 2ae524ed625438c5
> [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 
> 1714309299215
> [main] INFO org.apache.kafka.clients.consumer.internals.AsyncKafkaConsumer - 
> [Consumer clientId=1-1, groupId=1] Subscribed to topic(s): test-topic1
> [consumer_background_thread] INFO org.apache.kafka.clients.Metadata - 
> [Consumer clientId=1-1, groupId=1] Cluster ID: Some(MkU3OEVBNTcwNTJENDM2Qk)
> Stuck In here...
>  
> *Broker logs* 
> broker    | [2024-04-28 12:42:27,751] INFO Sent auto-creation request for 
> Set(__consumer_offsets) to the active controller. 
> (kafka.server.DefaultAutoTopicCreationManager)
> broker    | [2024-04-28 12:42:27,801] INFO Sent auto-creation request for 
> Set(__consumer_offsets) to the active controller. 
> (kafka.server.DefaultAutoTopicCreationManager)
> broker    | [2024-04-28 12:42:28,211] INFO Sent auto-creation request for 
> Set(__consumer_offsets) to the active controller. 
> (kafka.server.DefaultAutoTopicCreationManager)
> broker    | [2024-04-28 12:42:28,259] INFO Sent auto-creation request for 
> Set(__consumer_offsets) to the active controller. 
> (kafka.server.DefaultAutoTopicCreationManager)
> broker    | [2024-04-28 12:42:28,727] INFO Sent auto-creation request for 
> Set(__consumer_offsets) to the active controller. 
> (kafka.server.DefaultAutoTopicCreationManager)
> stuck in here





[jira] [Updated] (KAFKA-16637) KIP-848 does not work well

2024-04-29 Thread sanghyeok An (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sanghyeok An updated KAFKA-16637:
-
Attachment: image-2024-04-30-08-33-06-367.png

> KIP-848 does not work well
> --
>
> Key: KAFKA-16637
> URL: https://issues.apache.org/jira/browse/KAFKA-16637
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer
>Reporter: sanghyeok An
>Assignee: Kirk True
>Priority: Minor
>  Labels: kip-848-client-support
> Fix For: 3.8.0
>
> Attachments: image-2024-04-30-08-33-06-367.png, 
> image-2024-04-30-08-33-50-435.png
>
>
> I want to test next generation of the consumer rebalance protocol  
> ([https://cwiki.apache.org/confluence/display/KAFKA/The+Next+Generation+of+the+Consumer+Rebalance+Protocol+%28KIP-848%29+-+Early+Access+Release+Notes)]
>  
> However, it does not works well. 
> You can check my condition.
>  
> *Docker-compose.yaml*
> [https://github.com/chickenchickenlove/kraft-test/blob/main/docker-compose/docker-compose.yaml]
>  
> *Consumer Code*
> [https://github.com/chickenchickenlove/kraft-test/blob/main/src/main/java/org/example/Main.java]
>  
> *Consumer logs*
> [main] INFO org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector 
> - initializing Kafka metrics collector
> [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.7.0
> [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 
> 2ae524ed625438c5
> [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 
> 1714309299215
> [main] INFO org.apache.kafka.clients.consumer.internals.AsyncKafkaConsumer - 
> [Consumer clientId=1-1, groupId=1] Subscribed to topic(s): test-topic1
> [consumer_background_thread] INFO org.apache.kafka.clients.Metadata - 
> [Consumer clientId=1-1, groupId=1] Cluster ID: Some(MkU3OEVBNTcwNTJENDM2Qk)
> Stuck In here...
>  
> *Broker logs* 
> broker    | [2024-04-28 12:42:27,751] INFO Sent auto-creation request for 
> Set(__consumer_offsets) to the active controller. 
> (kafka.server.DefaultAutoTopicCreationManager)
> broker    | [2024-04-28 12:42:27,801] INFO Sent auto-creation request for 
> Set(__consumer_offsets) to the active controller. 
> (kafka.server.DefaultAutoTopicCreationManager)
> broker    | [2024-04-28 12:42:28,211] INFO Sent auto-creation request for 
> Set(__consumer_offsets) to the active controller. 
> (kafka.server.DefaultAutoTopicCreationManager)
> broker    | [2024-04-28 12:42:28,259] INFO Sent auto-creation request for 
> Set(__consumer_offsets) to the active controller. 
> (kafka.server.DefaultAutoTopicCreationManager)
> broker    | [2024-04-28 12:42:28,727] INFO Sent auto-creation request for 
> Set(__consumer_offsets) to the active controller. 
> (kafka.server.DefaultAutoTopicCreationManager)
> stuck in here





[jira] [Updated] (KAFKA-16637) KIP-848 does not work well

2024-04-29 Thread sanghyeok An (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sanghyeok An updated KAFKA-16637:
-
Attachment: image-2024-04-30-08-33-50-435.png

> KIP-848 does not work well
> --
>
> Key: KAFKA-16637
> URL: https://issues.apache.org/jira/browse/KAFKA-16637
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer
>Reporter: sanghyeok An
>Assignee: Kirk True
>Priority: Minor
>  Labels: kip-848-client-support
> Fix For: 3.8.0
>
> Attachments: image-2024-04-30-08-33-06-367.png, 
> image-2024-04-30-08-33-50-435.png
>
>
> I want to test next generation of the consumer rebalance protocol  
> ([https://cwiki.apache.org/confluence/display/KAFKA/The+Next+Generation+of+the+Consumer+Rebalance+Protocol+%28KIP-848%29+-+Early+Access+Release+Notes)]
>  
> However, it does not works well. 
> You can check my condition.
>  
> *Docker-compose.yaml*
> [https://github.com/chickenchickenlove/kraft-test/blob/main/docker-compose/docker-compose.yaml]
>  
> *Consumer Code*
> [https://github.com/chickenchickenlove/kraft-test/blob/main/src/main/java/org/example/Main.java]
>  
> *Consumer logs*
> [main] INFO org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector 
> - initializing Kafka metrics collector
> [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.7.0
> [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 
> 2ae524ed625438c5
> [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 
> 1714309299215
> [main] INFO org.apache.kafka.clients.consumer.internals.AsyncKafkaConsumer - 
> [Consumer clientId=1-1, groupId=1] Subscribed to topic(s): test-topic1
> [consumer_background_thread] INFO org.apache.kafka.clients.Metadata - 
> [Consumer clientId=1-1, groupId=1] Cluster ID: Some(MkU3OEVBNTcwNTJENDM2Qk)
> Stuck In here...
>  
> *Broker logs* 
> broker    | [2024-04-28 12:42:27,751] INFO Sent auto-creation request for 
> Set(__consumer_offsets) to the active controller. 
> (kafka.server.DefaultAutoTopicCreationManager)
> broker    | [2024-04-28 12:42:27,801] INFO Sent auto-creation request for 
> Set(__consumer_offsets) to the active controller. 
> (kafka.server.DefaultAutoTopicCreationManager)
> broker    | [2024-04-28 12:42:28,211] INFO Sent auto-creation request for 
> Set(__consumer_offsets) to the active controller. 
> (kafka.server.DefaultAutoTopicCreationManager)
> broker    | [2024-04-28 12:42:28,259] INFO Sent auto-creation request for 
> Set(__consumer_offsets) to the active controller. 
> (kafka.server.DefaultAutoTopicCreationManager)
> broker    | [2024-04-28 12:42:28,727] INFO Sent auto-creation request for 
> Set(__consumer_offsets) to the active controller. 
> (kafka.server.DefaultAutoTopicCreationManager)
> stuck in here





[jira] [Updated] (KAFKA-16637) KIP-848 does not work well

2024-04-28 Thread sanghyeok An (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sanghyeok An updated KAFKA-16637:
-
Description: 
I want to test the next generation of the consumer rebalance protocol 
([https://cwiki.apache.org/confluence/display/KAFKA/The+Next+Generation+of+the+Consumer+Rebalance+Protocol+%28KIP-848%29+-+Early+Access+Release+Notes])

 

However, it does not work well. 

You can check my setup below.

 

*Docker-compose.yaml*

[https://github.com/chickenchickenlove/kraft-test/blob/main/docker-compose/docker-compose.yaml]

 

*Consumer Code*

[https://github.com/chickenchickenlove/kraft-test/blob/main/src/main/java/org/example/Main.java]

 

*Consumer logs*

[main] INFO org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector - 
initializing Kafka metrics collector
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.7.0
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 
2ae524ed625438c5
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 
1714309299215
[main] INFO org.apache.kafka.clients.consumer.internals.AsyncKafkaConsumer - 
[Consumer clientId=1-1, groupId=1] Subscribed to topic(s): test-topic1
[consumer_background_thread] INFO org.apache.kafka.clients.Metadata - [Consumer 
clientId=1-1, groupId=1] Cluster ID: Some(MkU3OEVBNTcwNTJENDM2Qk)

Stuck In here...

 

*Broker logs* 

broker    | [2024-04-28 12:42:27,751] INFO Sent auto-creation request for 
Set(__consumer_offsets) to the active controller. 
(kafka.server.DefaultAutoTopicCreationManager)
broker    | [2024-04-28 12:42:27,801] INFO Sent auto-creation request for 
Set(__consumer_offsets) to the active controller. 
(kafka.server.DefaultAutoTopicCreationManager)
broker    | [2024-04-28 12:42:28,211] INFO Sent auto-creation request for 
Set(__consumer_offsets) to the active controller. 
(kafka.server.DefaultAutoTopicCreationManager)
broker    | [2024-04-28 12:42:28,259] INFO Sent auto-creation request for 
Set(__consumer_offsets) to the active controller. 
(kafka.server.DefaultAutoTopicCreationManager)
broker    | [2024-04-28 12:42:28,727] INFO Sent auto-creation request for 
Set(__consumer_offsets) to the active controller. 
(kafka.server.DefaultAutoTopicCreationManager)

stuck in here
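Since the description above wires the broker up through docker-compose, it is worth noting how broker properties usually reach a containerized broker. Many Kafka container images (e.g. confluentinc's) map environment variables to broker settings by a simple naming convention; this helper sketches that convention (the convention is the image's, not core Kafka's, so verify against the image you use):

```java
import java.util.Locale;

public class KafkaEnvVar {
    // Derive the docker-compose environment-variable name for a broker
    // property: upper-case it, replace '.' with '_', and prefix "KAFKA_".
    static String toEnvVar(String propertyName) {
        return "KAFKA_" + propertyName.toUpperCase(Locale.ROOT).replace('.', '_');
    }

    public static void main(String[] args) {
        System.out.println(toEnvVar("group.coordinator.rebalance.protocols"));
        // KAFKA_GROUP_COORDINATOR_REBALANCE_PROTOCOLS
    }
}
```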

  was:
I want to test next generation of the consumer rebalance protocol  
([https://cwiki.apache.org/confluence/display/KAFKA/The+Next+Generation+of+the+Consumer+Rebalance+Protocol+%28KIP-848%29+-+Early+Access+Release+Notes)]

 

However, it does not works well. 

You can check my condition.

 

### Docker-compose.yaml

[https://github.com/chickenchickenlove/kraft-test/blob/main/docker-compose/docker-compose.yaml]

 

### Consumer Code

[https://github.com/chickenchickenlove/kraft-test/blob/main/src/main/java/org/example/Main.java]

 

### Consumer logs

[main] INFO org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector - 
initializing Kafka metrics collector
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.7.0
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 
2ae524ed625438c5
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 
1714309299215
[main] INFO org.apache.kafka.clients.consumer.internals.AsyncKafkaConsumer - 
[Consumer clientId=1-1, groupId=1] Subscribed to topic(s): test-topic1
[consumer_background_thread] INFO org.apache.kafka.clients.Metadata - [Consumer 
clientId=1-1, groupId=1] Cluster ID: Some(MkU3OEVBNTcwNTJENDM2Qk)

Stuck In here...

 

### Broker logs 

broker    | [2024-04-28 12:42:27,751] INFO Sent auto-creation request for 
Set(__consumer_offsets) to the active controller. 
(kafka.server.DefaultAutoTopicCreationManager)
broker    | [2024-04-28 12:42:27,801] INFO Sent auto-creation request for 
Set(__consumer_offsets) to the active controller. 
(kafka.server.DefaultAutoTopicCreationManager)
broker    | [2024-04-28 12:42:28,211] INFO Sent auto-creation request for 
Set(__consumer_offsets) to the active controller. 
(kafka.server.DefaultAutoTopicCreationManager)
broker    | [2024-04-28 12:42:28,259] INFO Sent auto-creation request for 
Set(__consumer_offsets) to the active controller. 
(kafka.server.DefaultAutoTopicCreationManager)
broker    | [2024-04-28 12:42:28,727] INFO Sent auto-creation request for 
Set(__consumer_offsets) to the active controller. 
(kafka.server.DefaultAutoTopicCreationManager)

stuck in here


> KIP-848 does not work well
> --
>
> Key: KAFKA-16637
> URL: https://issues.apache.org/jira/browse/KAFKA-16637
> Project: Kafka
>  Issue Type: Bug
>Reporter: sanghyeok An
>Priority: Minor
>
> I want to test next generation of the consumer rebalance protocol  
> 

[jira] [Updated] (KAFKA-16637) KIP-848 does not work well

2024-04-28 Thread sanghyeok An (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sanghyeok An updated KAFKA-16637:
-
Description: 
I want to test the next generation of the consumer rebalance protocol 
([https://cwiki.apache.org/confluence/display/KAFKA/The+Next+Generation+of+the+Consumer+Rebalance+Protocol+%28KIP-848%29+-+Early+Access+Release+Notes])

 

However, it does not work well. 

You can check my setup below.

 

### Docker-compose.yaml

[https://github.com/chickenchickenlove/kraft-test/blob/main/docker-compose/docker-compose.yaml]

 

### Consumer Code

[https://github.com/chickenchickenlove/kraft-test/blob/main/src/main/java/org/example/Main.java]

 

### Consumer logs

[main] INFO org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector - 
initializing Kafka metrics collector
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.7.0
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 
2ae524ed625438c5
[main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 
1714309299215
[main] INFO org.apache.kafka.clients.consumer.internals.AsyncKafkaConsumer - 
[Consumer clientId=1-1, groupId=1] Subscribed to topic(s): test-topic1
[consumer_background_thread] INFO org.apache.kafka.clients.Metadata - [Consumer 
clientId=1-1, groupId=1] Cluster ID: Some(MkU3OEVBNTcwNTJENDM2Qk)

Stuck In here...

 

### Broker logs 

broker    | [2024-04-28 12:42:27,751] INFO Sent auto-creation request for 
Set(__consumer_offsets) to the active controller. 
(kafka.server.DefaultAutoTopicCreationManager)
broker    | [2024-04-28 12:42:27,801] INFO Sent auto-creation request for 
Set(__consumer_offsets) to the active controller. 
(kafka.server.DefaultAutoTopicCreationManager)
broker    | [2024-04-28 12:42:28,211] INFO Sent auto-creation request for 
Set(__consumer_offsets) to the active controller. 
(kafka.server.DefaultAutoTopicCreationManager)
broker    | [2024-04-28 12:42:28,259] INFO Sent auto-creation request for 
Set(__consumer_offsets) to the active controller. 
(kafka.server.DefaultAutoTopicCreationManager)
broker    | [2024-04-28 12:42:28,727] INFO Sent auto-creation request for 
Set(__consumer_offsets) to the active controller. 
(kafka.server.DefaultAutoTopicCreationManager)

stuck in here

> KIP-848 does not work well
> --
>
> Key: KAFKA-16637
> URL: https://issues.apache.org/jira/browse/KAFKA-16637
> Project: Kafka
>  Issue Type: Bug
>Reporter: sanghyeok An
>Priority: Minor
>
> I want to test next generation of the consumer rebalance protocol  
> ([https://cwiki.apache.org/confluence/display/KAFKA/The+Next+Generation+of+the+Consumer+Rebalance+Protocol+%28KIP-848%29+-+Early+Access+Release+Notes)]
>  
> However, it does not works well. 
> You can check my condition.
>  
> ### Docker-compose.yaml
> [https://github.com/chickenchickenlove/kraft-test/blob/main/docker-compose/docker-compose.yaml]
>  
> ### Consumer Code
> [https://github.com/chickenchickenlove/kraft-test/blob/main/src/main/java/org/example/Main.java]
>  
> ### Consumer logs
> [main] INFO org.apache.kafka.common.telemetry.internals.KafkaMetricsCollector 
> - initializing Kafka metrics collector
> [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.7.0
> [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 
> 2ae524ed625438c5
> [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 
> 1714309299215
> [main] INFO org.apache.kafka.clients.consumer.internals.AsyncKafkaConsumer - 
> [Consumer clientId=1-1, groupId=1] Subscribed to topic(s): test-topic1
> [consumer_background_thread] INFO org.apache.kafka.clients.Metadata - 
> [Consumer clientId=1-1, groupId=1] Cluster ID: Some(MkU3OEVBNTcwNTJENDM2Qk)
> Stuck In here...
>  
> ### Broker logs 
> broker    | [2024-04-28 12:42:27,751] INFO Sent auto-creation request for 
> Set(__consumer_offsets) to the active controller. 
> (kafka.server.DefaultAutoTopicCreationManager)
> broker    | [2024-04-28 12:42:27,801] INFO Sent auto-creation request for 
> Set(__consumer_offsets) to the active controller. 
> (kafka.server.DefaultAutoTopicCreationManager)
> broker    | [2024-04-28 12:42:28,211] INFO Sent auto-creation request for 
> Set(__consumer_offsets) to the active controller. 
> (kafka.server.DefaultAutoTopicCreationManager)
> broker    | [2024-04-28 12:42:28,259] INFO Sent auto-creation request for 
> Set(__consumer_offsets) to the active controller. 
> (kafka.server.DefaultAutoTopicCreationManager)
> broker    | [2024-04-28 12:42:28,727] INFO Sent auto-creation request for 
> Set(__consumer_offsets) to the active controller. 
> (kafka.server.DefaultAutoTopicCreationManager)
> stuck in here





[jira] [Created] (KAFKA-16637) KIP-848 does not work well

2024-04-28 Thread sanghyeok An (Jira)
sanghyeok An created KAFKA-16637:


 Summary: KIP-848 does not work well
 Key: KAFKA-16637
 URL: https://issues.apache.org/jira/browse/KAFKA-16637
 Project: Kafka
  Issue Type: Bug
Reporter: sanghyeok An








[jira] [Updated] (KAFKA-16637) KIP-848 does not work well

2024-04-28 Thread sanghyeok An (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sanghyeok An updated KAFKA-16637:
-
Priority: Minor  (was: Major)

> KIP-848 does not work well
> --
>
> Key: KAFKA-16637
> URL: https://issues.apache.org/jira/browse/KAFKA-16637
> Project: Kafka
>  Issue Type: Bug
>Reporter: sanghyeok An
>Priority: Minor
>






[jira] [Commented] (KAFKA-15951) MissingSourceTopicException should include topic names

2024-03-22 Thread sanghyeok An (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17830019#comment-17830019
 ] 

sanghyeok An commented on KAFKA-15951:
--

[~mjsax] Hi!

I have a tiny PR and a suggestion for this issue 
(https://github.com/apache/kafka/pull/15573).

When you have some free time, could you please take a look? 

Thanks in advance! 

> MissingSourceTopicException should include topic names
> --
>
> Key: KAFKA-15951
> URL: https://issues.apache.org/jira/browse/KAFKA-15951
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Priority: Major
>
> As the title say – we don't include topic names in all cases, what make it 
> hard for users to identify the root cause more clearly.





[jira] [Commented] (KAFKA-15951) MissingSourceTopicException should include topic names

2024-03-20 Thread sanghyeok An (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17828944#comment-17828944
 ] 

sanghyeok An commented on KAFKA-15951:
--

Gently ping, [~mjsax].

When you have free time, could you check the comments? :)

> MissingSourceTopicException should include topic names
> --
>
> Key: KAFKA-15951
> URL: https://issues.apache.org/jira/browse/KAFKA-15951
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Priority: Major
>
> As the title say – we don't include topic names in all cases, what make it 
> hard for users to identify the root cause more clearly.





[jira] [Commented] (KAFKA-15951) MissingSourceTopicException should include topic names

2024-03-18 Thread sanghyeok An (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17827968#comment-17827968
 ] 

sanghyeok An commented on KAFKA-15951:
--

Hi, [~mjsax] !

May I take this issue and work on it? 

I have already taken a look at the code, and I think I can handle it.

> MissingSourceTopicException should include topic names
> --
>
> Key: KAFKA-15951
> URL: https://issues.apache.org/jira/browse/KAFKA-15951
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Priority: Major
>
> As the title say – we don't include topic names in all cases, what make it 
> hard for users to identify the root cause more clearly.


