[ https://issues.apache.org/jira/browse/KAFKA-5321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16030703#comment-16030703 ]

Ismael Juma commented on KAFKA-5321:
------------------------------------

[~hachikuji], this was fixed as part of KAFKA-5316, right?

> MemoryRecords.filterTo can return corrupt data if output buffer is not large 
> enough
> -----------------------------------------------------------------------------------
>
>                 Key: KAFKA-5321
>                 URL: https://issues.apache.org/jira/browse/KAFKA-5321
>             Project: Kafka
>          Issue Type: Bug
>          Components: log
>            Reporter: Jason Gustafson
>            Assignee: Jason Gustafson
>            Priority: Blocker
>             Fix For: 0.11.0.0
>
>
> Due to KAFKA-5316, it is possible for a record set to grow during cleaning 
> and overflow the output buffer allocated for writing. When we reach the 
> record set which is doomed to overflow the buffer, there are two 
> possibilities:
> 1. No records were removed and the original entry is directly appended to the 
> log. This results in the overflow reported in KAFKA-5316.
> 2. Records were removed and a new record set is built. 
>
> Here we are concerned with the latter case. The problem is that the builder 
> code automatically allocates a new buffer when it reaches the end of the 
> existing buffer, but does not reset the position in the original buffer. Since 
> {{MemoryRecords.filterTo}} continues using the old buffer, this can lead to 
> data corruption after cleaning (the data left in the overflowed buffer is 
> garbage). 
>
> Note that this issue could get fixed as part of a general solution to 
> KAFKA-5316, but if that seems too risky, we might fix this separately. A 
> simple solution is to make both paths consistent and ensure that we raise an 
> exception (a rough sketch of the hazard and of this fix follows below).
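
Below is a minimal, self-contained Java sketch of the failure mode and the suggested fix. The GrowingBuilder class and its methods are hypothetical stand-ins, not Kafka's actual {{MemoryRecordsBuilder}} or {{MemoryRecords.filterTo}}; the point is only to show how silently switching to a larger buffer leaves the caller's original buffer in a state it no longer reflects, and how failing fast with an exception avoids that.

{code:java}
import java.nio.ByteBuffer;

// Hypothetical illustration only -- GrowingBuilder is a stand-in, not Kafka's
// MemoryRecordsBuilder. It shows how silently growing into a new buffer leaves
// the caller's original buffer behind, and how a strict capacity check avoids that.
public class OverflowSketch {

    static class GrowingBuilder {
        private ByteBuffer buffer;

        GrowingBuilder(ByteBuffer initial) {
            this.buffer = initial;
        }

        // Lenient path: mirrors the problematic behaviour described above.
        void append(byte[] record) {
            if (buffer.remaining() < record.length) {
                // Silent reallocation: from here on the caller's buffer is abandoned,
                // but nothing tells the caller, and its position is never reset.
                ByteBuffer larger = ByteBuffer.allocate(
                        Math.max(buffer.capacity() * 2, buffer.position() + record.length));
                buffer.flip();
                larger.put(buffer);
                buffer = larger;
            }
            buffer.put(record);
        }

        // Strict path: the "raise an exception" suggestion -- refuse to grow, so the
        // caller can never end up reading a half-written original buffer.
        void appendOrThrow(byte[] record) {
            if (buffer.remaining() < record.length)
                throw new IllegalStateException("output buffer too small for record set");
            buffer.put(record);
        }

        ByteBuffer internalBuffer() {
            return buffer;
        }
    }

    public static void main(String[] args) {
        ByteBuffer callerBuffer = ByteBuffer.allocate(16);
        GrowingBuilder builder = new GrowingBuilder(callerBuffer);

        builder.append(new byte[12]); // fits in the caller's buffer
        builder.append(new byte[12]); // overflows: the builder quietly switches buffers

        // The caller still holds callerBuffer and assumes the full output is in it;
        // only the first record actually is, so treating it as the cleaned record set
        // yields garbage.
        System.out.println("caller's buffer holds " + callerBuffer.position() + " bytes");   // 12
        System.out.println("builder actually wrote " + builder.internalBuffer().position()); // 24
    }
}
{code}

With the strict variant the cleaner fails loudly instead of silently producing a corrupt record set, which corresponds to the "make both paths consistent and raise an exception" option mentioned in the description.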



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
