fqaiser94 commented on code in PR #13533:
URL: https://github.com/apache/kafka/pull/13533#discussion_r1167581045


##########
streams/src/main/java/org/apache/kafka/streams/kstream/internals/ChangedSerializer.java:
##########
@@ -113,9 +113,9 @@ public byte[] serialize(final String topic, final Headers headers, final Change<
                 throw new StreamsException("Both old and new values are not null (" + data.oldValue
                         + " : " + data.newValue + ") in ChangeSerializer, which is not allowed unless upgrading.");
             } else {
-                final int capacity = UINT32_SIZE + newDataLength + oldDataLength + NEW_OLD_FLAG_SIZE;
+                final int capacity = MAX_VARINT_LENGTH + newDataLength + oldDataLength + NEW_OLD_FLAG_SIZE;
                 buf = ByteBuffer.allocate(capacity);
-                ByteUtils.writeUnsignedInt(buf, newDataLength);
+                ByteUtils.writeVarint(newDataLength, buf);

Review Comment:
   nit: I think it would be simpler to just use `buf.putInt(newDataLength)`, which always writes exactly 4 bytes. This is the approach taken in other similar places in the codebase, such as [`CombinedKeySchema.toBytes`](https://github.com/apache/kafka/blob/9c12e462106343fbc6af5873074d48f98687af39/streams/src/main/java/org/apache/kafka/streams/kstream/internals/foreignkeyjoin/CombinedKeySchema.java#L74), [`ProcessorRecordContext.serialize`](https://github.com/apache/kafka/blob/9c12e462106343fbc6af5873074d48f98687af39/streams/src/main/java/org/apache/kafka/streams/processor/internals/ProcessorRecordContext.java#L104), and [`ProcessorMetadata.serialize`](https://github.com/apache/kafka/blob/9c12e462106343fbc6af5873074d48f98687af39/streams/src/main/java/org/apache/kafka/streams/processor/internals/ProcessorMetadata.java#L78).
   
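   A minimal sketch of what I mean, untested against this branch; the flag value and helper names are illustrative, not from this PR:
   
   ```java
   import java.nio.ByteBuffer;
   
   // Sketch: fixed 4-byte length prefix, mirroring CombinedKeySchema.toBytes.
   // newData/oldData stand in for the already-serialized new/old values.
   final class FixedWidthSketch {
       static byte[] serialize(final byte[] newData, final byte[] oldData) {
           final int NEW_OLD_FLAG_SIZE = 1; // the single flag byte this class already writes
           final ByteBuffer buf = ByteBuffer.allocate(
                   Integer.BYTES + newData.length + oldData.length + NEW_OLD_FLAG_SIZE);
           buf.putInt(newData.length); // always exactly 4 bytes, so capacity is exact
           buf.put(newData);
           buf.put(oldData);
           buf.put((byte) 2); // hypothetical "both values present" flag
           return buf.array(); // buffer is exactly full; no trimming or copying needed
       }
   }
   ```
   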
   While it's nice that `ByteUtils.writeVarint` can potentially write out fewer bytes, the variability in the number of bytes used to encode an int makes the code more complex: you now have to allocate a buffer of the maximum possible size and then copy the result into a smaller array. IMO the increase in code complexity is not worth the size optimization. If we do think the size optimization is worth it, please see my other comments for how to make the code a little more readable.
   
   Either way, both approaches work functionally, so I will approve the PR regardless.
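   
   For contrast, here is a sketch of the varint path and the extra copy it forces (again untested; the flag value and class name are illustrative):
   
   ```java
   import java.nio.ByteBuffer;
   import java.util.Arrays;
   import org.apache.kafka.common.utils.ByteUtils;
   
   // Sketch: varint length prefix. The worst case for a 32-bit varint is 5 bytes,
   // so that much must be reserved up front and the result trimmed afterwards.
   final class VarintSketch {
       static byte[] serialize(final byte[] newData, final byte[] oldData) {
           final int NEW_OLD_FLAG_SIZE = 1;
           final int MAX_VARINT_LENGTH = 5; // worst case for a 32-bit varint
           final ByteBuffer buf = ByteBuffer.allocate(
                   MAX_VARINT_LENGTH + newData.length + oldData.length + NEW_OLD_FLAG_SIZE);
           ByteUtils.writeVarint(newData.length, buf); // writes 1..5 bytes
           buf.put(newData);
           buf.put(oldData);
           buf.put((byte) 2); // hypothetical flag value
           // The prefix is usually shorter than 5 bytes, so trim to the bytes actually written:
           return Arrays.copyOfRange(buf.array(), 0, buf.position());
       }
   }
   ```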
   


