[
https://issues.apache.org/jira/browse/KAFKA-703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13559018#comment-13559018
]
Sriram Subramanian commented on KAFKA-703:
------------------------------------------
Can we move this jira to the next version since we have decided to punt on it?
> A fetch request in Fetch Purgatory can double count the bytes from the same
> delayed produce request
> ---------------------------------------------------------------------------------------------------
>
> Key: KAFKA-703
> URL: https://issues.apache.org/jira/browse/KAFKA-703
> Project: Kafka
> Issue Type: Bug
> Components: purgatory
> Affects Versions: 0.8
> Reporter: Sriram Subramanian
> Assignee: Sriram Subramanian
> Priority: Blocker
> Fix For: 0.8
>
>
> When a produce request is handled, the fetch purgatory is checked to see
> whether any delayed fetch requests can now be satisfied. If the produce
> request is itself delayed, the check is done again when it is later
> satisfied, and if the same fetch request is still in the fetch purgatory it
> ends up double counting the bytes received.
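> A minimal sketch of the double count, assuming a purely illustrative
> DelayedFetch that accumulates bytes on every purgatory check (the class and
> method names here are hypothetical, not the actual broker code):
>
>     // Illustration only: a byte-accumulating delayed fetch double counts
>     // when the purgatory check runs twice for the same produce request.
>     object DoubleCountSketch {
>       class DelayedFetch(val minBytes: Int) {
>         private var accumulatedBytes = 0
>         // Called on every purgatory check triggered by a produce request.
>         def checkSatisfied(producedBytes: Int): Boolean = {
>           accumulatedBytes += producedBytes
>           accumulatedBytes >= minBytes
>         }
>       }
>
>       def main(args: Array[String]): Unit = {
>         val fetch = new DelayedFetch(minBytes = 200)
>         // Check when the delayed produce request of 150 bytes first arrives.
>         println(fetch.checkSatisfied(150)) // false: 150 < 200
>         // Check again when that same produce request is later satisfied:
>         // the same 150 bytes are added a second time.
>         println(fetch.checkSatisfied(150)) // true, although only 150 bytes were written
>       }
>     }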
> Possible Solutions:
> 1. In the delayed produce request case, do the check only after the produce
> request is satisfied. This could delay the fetch request from being
> satisfied.
> 2. Remove the fetch request's dependency on the produce request and just
> look at the last logical log offset (which should mostly be cached); a
> sketch of this approach follows below. This would require
> replica.fetch.min.bytes to be a number of messages rather than bytes. This
> also helps KAFKA-671, since we would no longer need to pass the
> ProduceRequest object to the producer purgatory and hence would not have to
> hold it in memory.
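> A sketch of solution 2, assuming a hypothetical fetch that is satisfied
> purely from the log's last logical offset (OffsetBasedDelayedFetch,
> fetchOffset, minMessages, and logEndOffset are illustrative names only):
>
>     // Illustration only: an offset-based check is idempotent, so seeing the
>     // same produce request twice cannot inflate the count.
>     object OffsetCheckSketch {
>       class OffsetBasedDelayedFetch(val fetchOffset: Long, val minMessages: Long) {
>         def isSatisfied(logEndOffset: Long): Boolean =
>           logEndOffset - fetchOffset >= minMessages
>       }
>
>       def main(args: Array[String]): Unit = {
>         val fetch = new OffsetBasedDelayedFetch(fetchOffset = 100L, minMessages = 50L)
>         println(fetch.isSatisfied(120L)) // false: only 20 new messages
>         println(fetch.isSatisfied(120L)) // false again: re-checking changes nothing
>         println(fetch.isSatisfied(160L)) // true: 60 new messages >= 50
>       }
>     }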