[ https://issues.apache.org/jira/browse/KAFKA-5403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16129866#comment-16129866 ]
Apurva Mehta edited comment on KAFKA-5403 at 8/17/17 4:19 AM:
--------------------------------------------------------------

I think we should punt on this. The problems with the patch are not easy to fix as the {{VerifiableConsumer}} validates that offsets are sequential, which is not true when you have transactions. And the {{ConsoleConsumer}} doesn't expose offsets. So modifying either without breaking compatibility will take time.

Also, the system test has been running reliably for months without suffering any problems with duplicate reads on the same offset.

was (Author: apurva):
I think we should punt on this. The problems with the patch are not easy to fix as the {{VerifiableConsumer}} validates that offsets are sequential, which is not true when you have transactions. And the {{ConsoleConsumer}} doesn't expose offsets. So modifying either breaking compatibility will take time.

Also, the system test has been running reliably for months without suffering any problems with duplicate reads on the same offset.

> Transactions system test should dedup consumed messages by offset
> -----------------------------------------------------------------
>
>                 Key: KAFKA-5403
>                 URL: https://issues.apache.org/jira/browse/KAFKA-5403
>             Project: Kafka
>          Issue Type: Bug
> Affects Versions: 0.11.0.0
>            Reporter: Apurva Mehta
>            Assignee: Apurva Mehta
>             Fix For: 1.0.0
>
>
> In KAFKA-5396, we saw that the consumers that verify the data in multiple topics could read the same offsets multiple times, for instance when a rebalance happens.
> This caused spurious duplicates to be detected, causing the test to fail. We should dedup the consumed messages by offset and only fail the test if we have duplicate values for a unique set of offsets.
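For illustration, a minimal sketch of the offset-level dedup the description proposes, written in Python since Kafka's transactions system test lives in the Python system-test framework. This is not the actual test code: the record shape (topic, partition, offset, value) and the helper name {{find_real_duplicates}} are assumptions made for this example. Messages are grouped by (topic, partition, offset), and only an offset observed with more than one distinct value counts as a genuine duplicate.

{code:python}
from collections import defaultdict

def find_real_duplicates(consumed):
    """Group consumed records by (topic, partition, offset) and return only the
    positions that were observed with more than one distinct value.

    `consumed` is assumed to be an iterable of (topic, partition, offset, value)
    tuples collected from the verifying consumers; re-reads of the same offset
    with the same value (e.g. after a rebalance) are collapsed, not failed.
    """
    values_by_position = defaultdict(set)
    for topic, partition, offset, value in consumed:
        values_by_position[(topic, partition, offset)].add(value)
    return {pos: vals for pos, vals in values_by_position.items() if len(vals) > 1}

# A rebalance re-delivering offset 7 with the same value is tolerated,
# but offset 9 carrying two different values would still fail the test.
consumed = [
    ("output-topic", 0, 7, "m7"),
    ("output-topic", 0, 7, "m7"),      # spurious duplicate read: same offset, same value
    ("output-topic", 0, 9, "m9"),
    ("output-topic", 0, 9, "m9-dup"),  # genuine duplicate: same offset, different values
]
assert list(find_real_duplicates(consumed)) == [("output-topic", 0, 9)]
{code}

On the sequential-offset point in the comment: when transactions are in use, commit/abort control markers occupy offsets in the log but are never delivered to the application, so a consumer of a transactional topic legitimately sees gaps in the offsets it reads, which is why the {{VerifiableConsumer}}'s sequential-offset validation cannot simply be reused here.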