[ https://issues.apache.org/jira/browse/KAFKA-557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13473728#comment-13473728 ]
Neha Narkhede commented on KAFKA-557:
-------------------------------------
1. Log
1.1 Typos in description of analyzeAndValidateMessageSet
1.2 Wrap long lines in this file
1.3 In the append API, the offsets are first read into an offsets variable and
then returned from outside the try/catch block. Is there a reason the flush and
the offsets are not inside the try block? Also, to avoid Scala bugs, it might be
better to turn the outer if-else into a case match, since a case match always
evaluates to a value; if it doesn't, the code doesn't compile. (A minimal sketch
follows below.)
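A minimal sketch of the suggestion above, assuming nothing about the actual
kafka.log.Log code; the names assignNewOffsets, reuseExistingOffsets and
flushIfNeeded are hypothetical stand-ins for whatever the patch really uses:

object AppendSketch {
  @volatile private var nextOffset = 0L

  // Hypothetical stand-in: hand out new consecutive offsets.
  private def assignNewOffsets(messageCount: Int): (Long, Long) = {
    val first = nextOffset
    nextOffset += messageCount
    (first, nextOffset - 1)
  }

  // Hypothetical stand-in: trust the offsets already carried by the messages.
  private def reuseExistingOffsets(firstOffset: Long, messageCount: Int): (Long, Long) = {
    nextOffset = firstOffset + messageCount
    (firstOffset, nextOffset - 1)
  }

  // Hypothetical stand-in for the flush mentioned in 1.3.
  private def flushIfNeeded(): Unit = ()

  // The outer if-else written as a match: a match is an expression, so every
  // branch has to produce the (firstOffset, lastOffset) pair or the code does
  // not compile. The flush and the offsets both stay inside the try block.
  def append(messageCount: Int, assignOffsets: Boolean, firstOffset: Long = -1L): (Long, Long) =
    try {
      val offsets = assignOffsets match {
        case true  => assignNewOffsets(messageCount)
        case false => reuseExistingOffsets(firstOffset, messageCount)
      }
      flushIfNeeded()
      offsets
    } catch {
      case e: java.io.IOException =>
        throw new RuntimeException("Append failed", e)
    }
}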
2. LogTest
2.1 Remove unused import
2.2 It looks like the expected value in the following assertEquals is
log.logEndOffset? If yes, it should be the first argument:
assertEquals(last + 1, log.logEndOffset)
2.3 In the following assertTrue, should we be using sizeInBytes instead?
assertTrue(log.read(5, 64*1024).size > 0)
2.4 Does it make sense to also check the offsets of the message set read from
the log? (See the sketch after this list.)
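A sketch of what 2.2-2.4 could look like together, assuming the existing
LogTest fixture provides the log and last values used in the snippets above,
and that the message set returned by log.read iterates MessageAndOffset
entries as in the 0.8 API; everything else is only illustrative:

import org.junit.Assert._

def assertReadBackOffsets() {
  // JUnit's assertEquals(expected, actual) takes the expected value first,
  // so whichever side is the known value should be the first argument.
  assertEquals(last + 1, log.logEndOffset)

  // Use sizeInBytes to check that bytes were actually read back (2.3), then
  // also verify the offsets of the returned messages (2.4).
  val read = log.read(5, 64*1024)
  assertTrue(read.sizeInBytes > 0)
  var expectedOffset = 5L
  for (messageAndOffset <- read) {
    assertEquals(expectedOffset, messageAndOffset.offset)
    expectedOffset += 1
  }
}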
> Replica fetch thread doesn't need to recompute message id
> ---------------------------------------------------------
>
> Key: KAFKA-557
> URL: https://issues.apache.org/jira/browse/KAFKA-557
> Project: Kafka
> Issue Type: Bug
> Components: core
> Affects Versions: 0.8
> Reporter: Jun Rao
> Priority: Blocker
> Labels: bugs
> Attachments: KAFKA-557.patch
>
> Original Estimate: 24h
> Remaining Estimate: 24h
>
> With kafka-506, the leader broker computes the logical id for each message
> produced. This could involve decompressing and recompressing messages, which
> is expensive. When data is replicated from the leader to the follower, we
> could avoid recomputing the logical message id since it's the same.