[
https://issues.apache.org/jira/browse/KAFKA-573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13475919#comment-13475919
]
John Fung commented on KAFKA-573:
---------------------------------
Here is some manual verification of the differences in the log segment files:
• Merge the log segment files for each broker:
$ for i in `find kafka_server_1_logs/ -name '0*.log' | sort`; do cat $i >> broker-1-merged/00000000000000000000.log; done
$ for i in `find kafka_server_2_logs/ -name '0*.log' | sort`; do cat $i >> broker-2-merged/00000000000000000000.log; done
$ for i in `find kafka_server_3_logs/ -name '0*.log' | sort`; do cat $i >> broker-3-merged/00000000000000000000.log; done
• Verify the checksums of the merged log segments; note that they differ:
$ cksum broker-1-merged/00000000000000000000.log
1742950004 1638036 broker-1-merged/00000000000000000000.log
$ cksum broker-2-merged/00000000000000000000.log
2050258314 1639080 broker-2-merged/00000000000000000000.log
$ cksum broker-3-merged/00000000000000000000.log
1802214049 1639080 broker-3-merged/00000000000000000000.log
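The checksum comparison above can also be done mechanically. A minimal sketch, assuming the same broker-N-merged/ layout as in the commands above (the function name is ours, not a Kafka tool):

```shell
# Hedged sketch: report any broker whose merged segment's cksum (checksum
# and byte count) differs from broker-1's. Paths follow the commands above.
check_merged_segments() {
  ref=$(cksum broker-1-merged/00000000000000000000.log | awk '{print $1, $2}')
  for n in 2 3; do
    cur=$(cksum "broker-$n-merged/00000000000000000000.log" | awk '{print $1, $2}')
    [ "$cur" = "$ref" ] || echo "broker-$n differs from broker-1"
  done
}
```

Given the cksum output above, this should report both broker-2 and broker-3, since each differs from broker-1 in checksum (and broker-1 also differs in size).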
• Run DumpLogSegments on each merged file:
$ bin/kafka-run-class.sh kafka.tools.DumpLogSegments broker-1-merged/00000000000000000000.log > broker-1-dump-log-segment.log
$ bin/kafka-run-class.sh kafka.tools.DumpLogSegments broker-2-merged/00000000000000000000.log > broker-2-dump-log-segment.log
$ bin/kafka-run-class.sh kafka.tools.DumpLogSegments broker-3-merged/00000000000000000000.log > broker-3-dump-log-segment.log
• Diff the dump-log-segment output between broker-1 and broker-2:
$ diff broker-1-dump-log-segment.log broker-2-dump-log-segment.log
1c1
< Dumping broker-1-merged/00000000000000000000.log
---
> Dumping broker-2-merged/00000000000000000000.log
113a114
> offset: 112 isvalid: true payloadsize: 500 magic: 2 compresscodec: NoCompressionCodec crc: 2581499653
168a170
> offset: 168 isvalid: true payloadsize: 500 magic: 2 compresscodec: NoCompressionCodec crc: 3880215630
387a390
> offset: 389 isvalid: true payloadsize: 500 magic: 2 compresscodec: NoCompressionCodec crc: 3744939326
2734d2736
< offset: 2737 isvalid: true payloadsize: 500 magic: 2 compresscodec: NoCompressionCodec crc: 314900536
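The added and removed records in the diff above can be isolated by comparing only the offset fields of the two dumps. A minimal sketch (the function name is ours; the dump file names follow the redirects above):

```shell
# Hedged sketch: print offsets that appear in the second DumpLogSegments
# output but not in the first, by set-differencing the "offset: N" fields.
offsets_only_in_second() {
  t1=$(mktemp); t2=$(mktemp)
  grep -o 'offset: [0-9]*' "$1" | awk '{print $2}' | sort > "$t1"
  grep -o 'offset: [0-9]*' "$2" | awk '{print $2}' | sort > "$t2"
  comm -13 "$t1" "$t2"   # lines unique to the second file
  rm -f "$t1" "$t2"
}
```

Per the diff above, `offsets_only_in_second broker-1-dump-log-segment.log broker-2-dump-log-segment.log` should print offsets 112, 168 and 389; swapping the arguments should print 2737.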
> System Test : Leader Failure Data Loss When request-num-acks is 1
> -----------------------------------------------------------------
>
> Key: KAFKA-573
> URL: https://issues.apache.org/jira/browse/KAFKA-573
> Project: Kafka
> Issue Type: Bug
> Reporter: John Fung
> Attachments: acks1_leader_failure_data_loss.tar.gz, kafka-573-reproduce-issue.patch
>
>
> • Test Description:
> 1. Start a 3-broker cluster as source
> 2. Send messages to source cluster
> 3. Find leader and terminate it (kill -15)
> 4. Start the broker again
> 5. Start a consumer to consume data
> 6. Compare the MessageID in the data between producer log and consumer log.
> • Issue: There will be data loss if request-num-acks is set to 1.
> • To reproduce this issue, please do the following:
> 1. Download the latest 0.8 branch
> 2. Apply the patch attached to this JIRA
> 3. Build kafka by running "./sbt update package"
> 4. Execute the test in directory "system_test" : "python -B system_test_runner.py"
> 5. This test will execute testcase_2 with the following settings:
> Replica factor : 3
> No. of partitions : 1
> No. of bouncing : 1
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira