[ https://issues.apache.org/jira/browse/KAFKA-18199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Sean Quah updated KAFKA-18199:
------------------------------
Description:
The code generated for {{write}} looks like:
{code:java}
_writable.writeUnsignedVarint(0);
_writable.writeUnsignedVarint(this.classicMemberMetadata.size(_cache, _version, _context) + 1);
_writable.writeUnsignedVarint(1);
classicMemberMetadata.write(_writable, _cache, _version, _context);{code}
while the code generated for {{addSize}} looks like:
{code:java}
_size.addBytes(1);
_size.addBytes(1);
int _sizeBeforeStruct = _size.totalSize();
this.classicMemberMetadata.addSize(_size, _cache, _version, _context);
int _structSize = _size.totalSize() - _sizeBeforeStruct;
_size.addBytes(ByteUtils.sizeOfUnsignedVarint(_structSize)); // missing a `+ 1`{code}
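Presumably the corrected output would need to account for the same {{+ 1}} that the {{write}} path emits; a minimal sketch of what that last generated line might look like (the real fix belongs in the message generator, and the exact change may differ):
{code:java}
// Hypothetical corrected line: size the length prefix for _structSize + 1,
// matching the `+ 1` added on the write path.
_size.addBytes(ByteUtils.sizeOfUnsignedVarint(_structSize + 1));{code}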
The missing {{+ 1}} becomes a problem when the serialized size of the {{ClassicMemberMetadata}} struct is exactly 127 bytes, since the varint representations of {{127}} and {{128}} have different lengths: one byte versus two.
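A minimal standalone sketch of that boundary using Kafka's {{ByteUtils}} (the import path {{org.apache.kafka.common.utils.ByteUtils}} is assumed here):
{code:java}
import org.apache.kafka.common.utils.ByteUtils;

public class VarintBoundaryDemo {
    public static void main(String[] args) {
        int structSize = 127;
        // addSize counts the length prefix for 127, which fits in a single varint byte...
        int counted = ByteUtils.sizeOfUnsignedVarint(structSize);      // 1
        // ...while write emits the length prefix for 127 + 1 = 128, which needs two bytes.
        int written = ByteUtils.sizeOfUnsignedVarint(structSize + 1);  // 2
        System.out.println("counted=" + counted + ", written=" + written);
    }
}{code}
The one-byte shortfall means the buffer sized from {{addSize}} is too small for what {{write}} produces.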
In practice this bug causes {{java.nio.BufferOverflowException}}s when using the new consumer protocol.
> Incorrect size calculation for ConsumerGroupMemberMetadataValue.classicMemberMetadata
> -------------------------------------------------------------------------------------
>
> Key: KAFKA-18199
> URL: https://issues.apache.org/jira/browse/KAFKA-18199
> Project: Kafka
> Issue Type: Bug
> Components: group-coordinator
> Reporter: Sean Quah
> Assignee: Sean Quah
> Priority: Blocker
> Fix For: 4.0.0
>
>
--
This message was sent by Atlassian Jira
(v8.20.10#820010)