showuon commented on PR #14434:
URL: https://github.com/apache/kafka/pull/14434#issuecomment-1735496347

   One question I'd like to get your thoughts on. This test does the following (a rough sketch follows the list):
   1. Generate a 2-byte key and a 128-byte value.
   2. Create a record compressed with the snappy codec.
   3. Get the size of the record, which is 197 bytes.
   4. Add a hardcoded 5 bytes of headroom for decompression, so the max message size is set to 202.
   5. After decompression, the record size is 203 and an exception is thrown, whereas before bumping the snappy version it was always 202.
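
   For reference, here's a minimal sketch of the size bookkeeping those steps describe, using the public `MemoryRecords` API. It's only an illustration of the scenario, not the actual test code from this PR; the concrete numbers (197/202/203) are the ones quoted above.
   
   ```java
   import org.apache.kafka.common.record.CompressionType;
   import org.apache.kafka.common.record.MemoryRecords;
   import org.apache.kafka.common.record.SimpleRecord;
   
   // Illustrative sketch of the size check described above, not the real test.
   public class SnappySizeCheckSketch {
       public static void main(String[] args) {
           byte[] key = new byte[2];     // step 1: 2-byte key
           byte[] value = new byte[128]; // step 1: 128-byte value
   
           // Step 2: build a snappy-compressed batch containing the single record.
           MemoryRecords compressed = MemoryRecords.withRecords(
                   CompressionType.SNAPPY, new SimpleRecord(key, value));
   
           // Step 3: the compressed batch size (197 bytes in the scenario above).
           int recordSize = compressed.sizeInBytes();
   
           // Step 4: hardcoded 5-byte headroom for decompression, i.e. 202 here.
           int maxMessageSize = recordSize + 5;
   
           System.out.println("compressed size = " + recordSize
                   + ", max message size = " + maxMessageSize);
   
           // Step 5 happens inside broker-side validation: the batch is decompressed
           // and re-validated, and with the bumped snappy version the resulting size
           // (203) exceeds maxMessageSize, so an exception is thrown.
       }
   }
   ```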
   
   Here's my question: 
   If the decompressed records can be read correctly on the consumer side (I haven't tested it yet), should we still worry about the additional 1 byte after decompression?
   
   @divijvaidya @ijuma @jlprat , thoughts?

