[ https://issues.apache.org/jira/browse/KAFKA-17829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17896174#comment-17896174 ]
Abhinav Dixit edited comment on KAFKA-17829 at 11/7/24 7:03 AM:
----------------------------------------------------------------
I ran a few tests using a single Kafka broker to verify whether requests that
are still sitting in the purgatory cause any problem when the replica manager
is closed. Here is one of the tests I ran that shows it is not a problem:
I increased the maxWait for ShareFetch requests to 10 minutes, so the requests
remain in the purgatory for a while. I did some produce and consume using share
consumers and then let some share fetch requests remain stuck. Once I was
absolutely confident that some fetch requests were stuck (with the help of
logs), I killed my broker (an unclean shutdown in this case). The broker shut
down without any problem. I then restarted the broker and continued producing
and consuming with console share consumers. This worked absolutely fine, which
means that even if some requests never complete, they are simply reissued when
the server comes back up and they do not cause any problems. Whether I do a
clean or an unclean shutdown of the broker, the broker shuts down as expected.
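For reference, here is a rough, hypothetical sketch of the client side of that test flow (not the actual test code). It assumes the KIP-932 KafkaShareConsumer with its KafkaConsumer-like subscribe/poll API; the topic name, share group id and bootstrap address are placeholders, and the enlarged 10-minute ShareFetch maxWait was a broker-side change that does not appear here.
{code:java}
// Hypothetical client-side flow of the test described above.
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaShareConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ShareFetchRestartCheck {
    public static void main(String[] args) throws Exception {
        String bootstrap = "localhost:9092";   // placeholder
        String topic = "share-test";           // placeholder

        // Produce a few records first.
        Properties p = new Properties();
        p.put("bootstrap.servers", bootstrap);
        p.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        p.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(p)) {
            for (int i = 0; i < 10; i++) {
                producer.send(new ProducerRecord<>(topic, "key-" + i, "value-" + i)).get();
            }
        }

        // Consume via a share group; once the topic is drained, the next ShareFetch
        // sits in the broker's purgatory until maxWait expires or new data arrives.
        Properties c = new Properties();
        c.put("bootstrap.servers", bootstrap);
        c.put("group.id", "share-test-group"); // share group id, placeholder
        c.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        c.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaShareConsumer<String, String> consumer = new KafkaShareConsumer<>(c)) {
            consumer.subscribe(List.of(topic));
            while (true) {
                // Kill and restart the broker while this loop is blocked in poll();
                // after the restart the client simply reissues its fetches.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(30));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.key() + " -> " + record.value());
                }
            }
        }
    }
}
{code}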
> Verify ShareFetch requests return a completed/erroneous future on purgatory
> close
> ---------------------------------------------------------------------------------
>
> Key: KAFKA-17829
> URL: https://issues.apache.org/jira/browse/KAFKA-17829
> Project: Kafka
> Issue Type: Sub-task
> Reporter: Abhinav Dixit
> Assignee: Abhinav Dixit
> Priority: Major
>
> We need to verify that on shutdown of the delayed share fetch purgatory, the
> share fetch requests which are present inside the purgatory return with an
> erroneous future or a completed future.
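As a rough illustration of the property the description asks to verify, here is a minimal, hypothetical sketch in plain Java of a pending-request holder whose close() leaves every outstanding future either completed or completed exceptionally. This is not Kafka's actual DelayedShareFetch or purgatory code; the class, method names and exception are placeholders.
{code:java}
// Minimal, hypothetical sketch of the behaviour to verify: closing the holder
// must leave every pending future either completed or completed exceptionally.
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

public class PendingShareFetches implements AutoCloseable {
    private final Map<Long, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

    // Register a fetch that will be completed later (by new data or maxWait expiry).
    public CompletableFuture<String> add(long requestId) {
        return pending.computeIfAbsent(requestId, id -> new CompletableFuture<>());
    }

    // Normal completion path.
    public void complete(long requestId, String response) {
        CompletableFuture<String> f = pending.remove(requestId);
        if (f != null) f.complete(response);
    }

    // On close (e.g. broker shutdown), fail whatever is still outstanding so
    // callers get an erroneous future instead of one that never completes.
    @Override
    public void close() {
        pending.forEach((id, f) ->
            f.completeExceptionally(new IllegalStateException("purgatory closed")));
        pending.clear();
    }

    public static void main(String[] args) {
        PendingShareFetches purgatory = new PendingShareFetches();
        CompletableFuture<String> stuck = purgatory.add(42L);
        purgatory.close();
        // The future is done and erroneous, which is the condition a test would assert.
        System.out.println("done=" + stuck.isDone()
            + " erroneous=" + stuck.isCompletedExceptionally());
    }
}
{code}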