Hi
I am seeing a few messages getting corrupted in Kafka. It does not happen
frequently and the percentage is very small (less than 0.1%).

Basically, I am publishing thrift events as raw byte arrays to Kafka
topics (without any encoding such as base64), and I also see more events on
the topic than I publish (I confirm this by looking at the offset for that
topic). For example, if I publish 100 events, I see 110 as the offset for
that topic. Since this is in production I could not capture the exact
messages that cause the problem, and we only notice it when we consume,
because our thrift deserialization fails.
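
To give you an idea of how we publish, here is a rough sketch (the struct,
topic, and broker names below are placeholders, not our real code): the
thrift struct is serialized with TBinaryProtocol and the raw bytes are sent
as the record value.

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.thrift.TSerializer;
import org.apache.thrift.protocol.TBinaryProtocol;

import java.util.Properties;

public class ThriftEventPublisher {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.ByteArraySerializer");

        // MyThriftEvent is a placeholder for the thrift-generated event class.
        MyThriftEvent event = new MyThriftEvent();

        // Serialize the struct with TBinaryProtocol into a raw byte array.
        byte[] payload = new TSerializer(new TBinaryProtocol.Factory())
                             .serialize(event);

        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            // The raw thrift bytes go out as the record value, no base64 wrapping.
            producer.send(new ProducerRecord<>("events-topic", payload));
        }
    }
}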

So my question is: is there a magic byte that determines the boundary of a
message, which could collide with a byte I am sending, or could a network
issue cause a message to get chopped up and stored as multiple messages on
the server side?
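
In case it helps, the consuming side looks roughly like this (again, names
are placeholders): the only place we notice the corrupt records is the
TException catch around deserialization, where we can at least log the
partition and offset of the bad record.

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.thrift.TDeserializer;
import org.apache.thrift.TException;
import org.apache.thrift.protocol.TBinaryProtocol;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class ThriftEventConsumer {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker
        props.put("group.id", "thrift-debug");
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("events-topic"));
        TDeserializer deserializer = new TDeserializer(new TBinaryProtocol.Factory());

        while (true) {
            ConsumerRecords<byte[], byte[]> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<byte[], byte[]> record : records) {
                MyThriftEvent event = new MyThriftEvent();   // placeholder generated class
                try {
                    deserializer.deserialize(event, record.value());
                } catch (TException e) {
                    // This is where we first see the problem: log the exact
                    // partition/offset of the record that fails to deserialize.
                    System.err.printf("Corrupt record at %s-%d offset %d (%d bytes)%n",
                            record.topic(), record.partition(), record.offset(),
                            record.value() == null ? 0 : record.value().length);
                }
            }
        }
    }
}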

tx
SunilKalva
