[ https://issues.apache.org/jira/browse/KAFKA-2477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730713#comment-14730713 ]

Håkon Hitland commented on KAFKA-2477:
--------------------------------------

I don't see any out-of-sequence offsets.
Here are a couple of recent examples.
If I run with --deep-iteration, all offsets are present and sequential.
The result on the replica is identical to the leader.
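For context, a sketch (mine, not from the thread) of why a shallow dump can show the gaps above while --deep-iteration shows every offset: with compressed message sets, the wrapper message carries the offset of the last inner message, so a shallow iteration surfaces only those wrapper offsets. The batch contents below are hypothetical, shaped to match the first log excerpt:

```python
# Sketch, assuming Kafka 0.8.x offset assignment: a compressed wrapper
# message takes the offset of the LAST inner message, so shallow iteration
# shows gaps while deep iteration yields every offset.

def shallow_offsets(batches):
    # Offsets visible WITHOUT --deep-iteration: one per wrapper message
    # (the last inner offset of each compressed set).
    return [batch[-1] for batch in batches]

def deep_offsets(batches):
    # Offsets visible WITH --deep-iteration: every inner message expanded.
    return [offset for batch in batches for offset in batch]

# Hypothetical batches mirroring the first excerpt: two uncompressed
# single-message sets, then two 2-message compressed sets.
batches = [[10591627210], [10591627211],
           [10591627212, 10591627213], [10591627214, 10591627215]]

print(shallow_offsets(batches))  # [10591627210, 10591627211, 10591627213, 10591627215]
print(deep_offsets(batches))     # sequential 10591627210..10591627215
```

Under that assumption, the shallow view (10591627211, 10591627213, 10591627215) has apparent gaps even though the deep view is contiguous, which matches what the dump tool reports here.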
---
[2015-09-02 23:43:03,379] ERROR [Replica Manager on Broker 0]: Error when processing fetch request for partition [log.event,3] offset 10591627212 from follower with correlation id 391785394. Possible cause: Request for offset 10591627212 but we only have log segments in the range 10444248800 to 10591627211. (kafka.server.ReplicaManager)

offset: 10591627210 position: 994954613 isvalid: true payloadsize: 674 magic: 0 compresscodec: SnappyCompressionCodec crc: 4144791071
offset: 10591627211 position: 994955313 isvalid: true payloadsize: 1255 magic: 0 compresscodec: SnappyCompressionCodec crc: 1011806998
offset: 10591627213 position: 994956594 isvalid: true payloadsize: 1460 magic: 0 compresscodec: SnappyCompressionCodec crc: 4145284502
offset: 10591627215 position: 994958080 isvalid: true payloadsize: 1719 magic: 0 compresscodec: SnappyCompressionCodec crc: 444418110

----

[2015-09-03 11:44:02,483] ERROR [Replica Manager on Broker 3]: Error when processing fetch request for partition [log.count,5] offset 69746066284 from follower with correlation id 239821628. Possible cause: Request for offset 69746066284 but we only have log segments in the range 68788206610 to 69746066280. (kafka.server.ReplicaManager)

offset: 69746066278 position: 464897345 isvalid: true payloadsize: 674 magic: 0 compresscodec: SnappyCompressionCodec crc: 3013732329
offset: 69746066279 position: 464898045 isvalid: true payloadsize: 234 magic: 0 compresscodec: SnappyCompressionCodec crc: 3286064200
offset: 69746066283 position: 464898305 isvalid: true payloadsize: 486 magic: 0 compresscodec: SnappyCompressionCodec crc: 747917524
offset: 69746066285 position: 464898817 isvalid: true payloadsize: 342 magic: 0 compresscodec: SnappyCompressionCodec crc: 4283754786
offset: 69746066286 position: 464899185 isvalid: true payloadsize: 233 magic: 0 compresscodec: SnappyCompressionCodec crc: 2129213572

> Replicas spuriously deleting all segments in partition
> ------------------------------------------------------
>
>                 Key: KAFKA-2477
>                 URL: https://issues.apache.org/jira/browse/KAFKA-2477
>             Project: Kafka
>          Issue Type: Bug
>    Affects Versions: 0.8.2.1
>            Reporter: Håkon Hitland
>         Attachments: kafka_log.txt
>
>
> We're seeing some strange behaviour in brokers: a replica will sometimes 
> schedule all segments in a partition for deletion, and then immediately start 
> replicating them back, triggering our check for under-replicated topics.
> This happens on average a couple of times a week, for different brokers and 
> topics.
> We have per-topic retention.ms and retention.bytes configuration; the topics 
> where we've seen this happen are hitting the size limit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
