[ https://issues.apache.org/jira/browse/KAFKA-8154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17725975#comment-17725975 ]

keith.paulson commented on KAFKA-8154:
--------------------------------------

[~tmancill] I used almost the same patch as you did, including the hunk at 564 that you mention in your first point, though I also adjusted the comparison for the application buffer size check right after 564.

The change at 564 caused problems; I started getting
{code:java}
Buffer overflow when available data size (16504) > application buffer size (16384) {code}
This is not an off-by-one overrun; the data size exceeds the buffer by much more than 1 byte (120 here).

The conditional involved is
{code:java}
(appReadBuffer.position() >= currentApplicationBufferSize) {code}
appReadBuffer is initialized to currentApplicationBufferSize, so this would never fire - except that the 564 ensureCapacity call will increase it to match the netWriteBuffer size.
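
To make the failure mode concrete, here is a minimal standalone sketch (not the actual SslTransportLayer code; the ensureCapacity helper is approximated, and the sizes are taken from the numbers above):
{code:java}
import java.nio.ByteBuffer;

public class EnsureCapacityDemo {
    // Approximation of the ensureCapacity helper referenced above:
    // grows the buffer only if the requested capacity exceeds the current one.
    static ByteBuffer ensureCapacity(ByteBuffer buf, int newCapacity) {
        if (newCapacity > buf.capacity()) {
            ByteBuffer bigger = ByteBuffer.allocate(newCapacity);
            buf.flip();
            bigger.put(buf);
            return bigger;
        }
        return buf;
    }

    public static void main(String[] args) {
        int currentApplicationBufferSize = 1 << 14;   // 16384, BC's app buffer size
        int netWriteBufferSize = (1 << 14) + 120;     // 16504, app size + protocol offset

        // appReadBuffer starts at the application buffer size...
        ByteBuffer appReadBuffer = ByteBuffer.allocate(currentApplicationBufferSize);

        // ...but the hunk at 564 grows it to the net buffer size,
        appReadBuffer = ensureCapacity(appReadBuffer, netWriteBufferSize);

        // so a large unwrap can now leave more than 16384 bytes in it,
        appReadBuffer.position(16504);

        // and the guard fires even though nothing actually overflowed.
        if (appReadBuffer.position() >= currentApplicationBufferSize) {
            System.out.println("Buffer overflow when available data size ("
                + appReadBuffer.position() + ") > application buffer size ("
                + currentApplicationBufferSize + ")");
        }
    }
}
{code}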

BC sets the app buffer to 1<<14 and the net buffer to that plus an offset that varies by protocol, so netWriteBuffer will always be bigger than the app buffer and will trigger that exception on large data blocks.
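
Both sizes come straight off the SSLSession; a quick sketch to see the mismatch (this uses the JDK default context - register the BC-FIPS provider instead to reproduce the exact 1<<14 figure described above):
{code:java}
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;
import javax.net.ssl.SSLSession;

public class BufferSizes {
    public static void main(String[] args) throws Exception {
        SSLContext ctx = SSLContext.getDefault(); // swap in BC-FIPS to reproduce
        SSLEngine engine = ctx.createSSLEngine();
        SSLSession session = engine.getSession();
        // With BC-FIPS the application size is 16384 and the packet size is
        // larger by a protocol-dependent offset, so sizing appReadBuffer from
        // the packet size guarantees position() can pass the app-size check.
        System.out.println("application buffer: " + session.getApplicationBufferSize());
        System.out.println("packet (net) buffer: " + session.getPacketBufferSize());
    }
}
{code}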

With that line removed, my half Java SSL / half BC-FIPS cluster has run solidly for hours.

tl;dr: the 564 hunk needs to be dropped.

> Buffer Overflow exceptions between brokers and with clients
> -----------------------------------------------------------
>
>                 Key: KAFKA-8154
>                 URL: https://issues.apache.org/jira/browse/KAFKA-8154
>             Project: Kafka
>          Issue Type: Bug
>          Components: clients
>    Affects Versions: 2.1.0
>            Reporter: Rajesh Nataraja
>            Priority: Major
>         Attachments: server.properties.txt
>
>
> https://github.com/apache/kafka/pull/6495
> https://github.com/apache/kafka/pull/5785
