[
https://issues.apache.org/jira/browse/KAFKA-598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Joel Koshy updated KAFKA-598:
-----------------------------
Attachment: KAFKA-598-v3.patch
Quick overview of revised patch:
1 - Addressed your comment about the previous behavior in ConsumerIterator
(good catch on that!) and the config defaults.
2 - Changed the semantics of fetch size to max memory. Max memory is a long
(as an int would limit it to 2G). The actual partition fetch size is checked
for overflow (in which case it is set to Int.MaxValue).
3 - Also introduced a DeprecatedProperties convenience class that will be
checked during config verification. I added this because I think max.memory
is a more meaningful config than fetch.size, and we can use this to
deprecate other configs if needed.
4 - The partition count is a volatile int - I chose that over a method only to
avoid traversing the partitions on every request to determine the count.
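To illustrate point 2 above, here is a minimal sketch of the overflow clamp when dividing a long max-memory budget across partitions. The class and method names are illustrative only and do not come from the patch:

```java
// Hypothetical sketch: derive a per-partition fetch size from a long
// max-memory budget, clamping to Integer.MAX_VALUE on overflow.
public class FetchSizeCalc {
    static int partitionFetchSize(long maxMemoryBytes, int partitionCount) {
        // Guard against division by zero when no partitions are assigned yet.
        long perPartition = maxMemoryBytes / Math.max(partitionCount, 1);
        // A long budget can exceed what an int fetch size can represent.
        return perPartition > Integer.MAX_VALUE ? Integer.MAX_VALUE : (int) perPartition;
    }

    public static void main(String[] args) {
        System.out.println(partitionFetchSize(4L * 1024 * 1024 * 1024, 1)); // 2147483647
        System.out.println(partitionFetchSize(64L * 1024 * 1024, 16));      // 4194304
    }
}
```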
> decouple fetch size from max message size
> -----------------------------------------
>
> Key: KAFKA-598
> URL: https://issues.apache.org/jira/browse/KAFKA-598
> Project: Kafka
> Issue Type: Bug
> Components: core
> Affects Versions: 0.8
> Reporter: Jun Rao
> Assignee: Joel Koshy
> Priority: Blocker
> Attachments: KAFKA-598-v1.patch, KAFKA-598-v2.patch,
> KAFKA-598-v3.patch
>
>
> Currently, a consumer has to set its fetch size larger than the max message
> size. This increases the memory footprint on the consumer, especially when a
> large number of topic/partitions are subscribed. By decoupling the fetch size
> from the max message size, we can use a smaller fetch size for normal
> consumption and, when hitting a large message (hopefully rare), automatically
> increase the fetch size to the max message size temporarily.
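The decoupling described in the issue could be sketched as follows. All names and sizes here are illustrative assumptions, not taken from the patch:

```java
// Hypothetical sketch: use a small fetch size for normal consumption, and
// only when the last fetch hit a message larger than that size, retry the
// partition with the (larger) max message size.
public class AdaptiveFetch {
    static final int DEFAULT_FETCH_SIZE = 1024 * 1024;      // normal consumption
    static final int MAX_MESSAGE_SIZE   = 10 * 1024 * 1024; // broker-side limit

    /** Fetch size to use for the next request to a partition. */
    static int nextFetchSize(boolean lastFetchHitOversizedMessage) {
        return lastFetchHitOversizedMessage ? MAX_MESSAGE_SIZE : DEFAULT_FETCH_SIZE;
    }

    public static void main(String[] args) {
        System.out.println(nextFetchSize(false)); // 1048576
        System.out.println(nextFetchSize(true));  // 10485760
    }
}
```

The memory saving comes from the common case: most fetches use the small size, and the large buffer is only allocated for the rare oversized message.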
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira