Hello Akash,

I am sometimes getting the error logs below when a Kafka producer tries to
produce messages to the topic.

*Logs:-*
org.apache.kafka.common.errors.NotLeaderOrFollowerException: For requests
intended only for the leader, this error indicates that the broker is not
the current leader. For requests intended for any replica, this error
indicates that the broker is not a replica of the topic partition

Error: org.apache.kafka.common.errors.OutOfOrderSequenceException: The
broker received an out of order sequence number
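
For reference, a minimal sketch of the kind of producer setup in which these
two errors can surface (assuming an idempotent producer; the bootstrap address
and the values shown here are placeholders, not my exact configuration):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;

Properties props = new Properties();
props.put("bootstrap.servers", "broker1:9092");   // placeholder address
props.put("key.serializer",
    "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer",
    "org.apache.kafka.common.serialization.StringSerializer");
// NotLeaderOrFollowerException is retriable; with retries the client refreshes
// metadata and resends to the new partition leader
props.put("retries", "5");
props.put("acks", "all");
// with idempotence enabled the broker tracks per-partition sequence numbers,
// which is where OutOfOrderSequenceException comes from
props.put("enable.idempotence", "true");
props.put("compression.type", "snappy");
KafkaProducer<String, String> producer = new KafkaProducer<>(props);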

On Fri, Aug 30, 2024 at 9:04 AM Vikram Singh <vikram.si...@nciportal.com>
wrote:

> Hello Akash,
>
> I will try changing the compression type and test whether the issue
> persists, but for now the issue appears to be resolved. I changed the
> replication_factor of the topic to 3; it is a 3-node Kafka cluster and the
> replication_factor had been configured as 2 for the topic.
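>
> As a sanity check, a minimal sketch using the Java AdminClient to confirm
> the new replication factor (topic name and bootstrap address are
> placeholders; assumes a reasonably recent client):
>
> import java.util.Collections;
> import java.util.Properties;
> import org.apache.kafka.clients.admin.Admin;
> import org.apache.kafka.clients.admin.TopicDescription;
>
> public class CheckReplication {
>     public static void main(String[] args) throws Exception {
>         Properties props = new Properties();
>         props.put("bootstrap.servers", "broker1:9092");   // placeholder
>         try (Admin admin = Admin.create(props)) {
>             TopicDescription desc = admin
>                 .describeTopics(Collections.singleton("my-topic"))
>                 .allTopicNames().get().get("my-topic");
>             // each partition should now report 3 replicas and, ideally, 3 ISRs
>             desc.partitions().forEach(p -> System.out.println(
>                 "partition " + p.partition()
>                 + " replicas=" + p.replicas().size()
>                 + " isr=" + p.isr().size()));
>         }
>     }
> }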
>
> On Wed, Aug 28, 2024 at 2:55 PM Akash Jain <akashjain0...@gmail.com>
> wrote:
>
>> Also, what version of Java are you running?
>>
>> On Wed, Aug 28, 2024 at 11:44 AM Akash Jain <akashjain0...@gmail.com>
>> wrote:
>>
>> > Hi Vikram, please share some more details:
>> >
>> >    1. producer version
>> >    2. broker version
>> >    3. the broker-side config you mentioned is "snappy"; can you check what
>> >    it is at the topic level as well? (A small sketch for checking this is
>> >    at the end of this mail.)
>> >    4. do you face this issue when you use any other algorithm instead of
>> >    snappy, say lz4? Try with the topic compression.type set to producer and
>> >    the producer compression.type set to anything other than snappy
>> >    5. do you face this issue when you disable compression?
>> >    6. you also mention that you are facing this issue randomly; can you
>> >    elaborate more? For example, do you have different producer versions and
>> >    only a specific producer shows this problem? Or does a specific broker
>> >    show this problem? Have you observed any pattern at all?
>> >
>> > Do you see these exceptions on the broker? I guess yes, since the broker
>> > will decompress the messages. This is likely because the version of snappy
>> > used by the producer is 'far away' from the version of snappy on the broker.
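>> >
>> > For point 3, a minimal sketch of how to read the topic-level setting with
>> > the Java AdminClient (topic name and bootstrap address are placeholders):
>> >
>> > import java.util.Collections;
>> > import java.util.Properties;
>> > import org.apache.kafka.clients.admin.Admin;
>> > import org.apache.kafka.clients.admin.Config;
>> > import org.apache.kafka.common.config.ConfigResource;
>> >
>> > public class CheckTopicCompression {
>> >     public static void main(String[] args) throws Exception {
>> >         Properties props = new Properties();
>> >         props.put("bootstrap.servers", "broker1:9092");   // placeholder
>> >         ConfigResource topic =
>> >             new ConfigResource(ConfigResource.Type.TOPIC, "my-topic");
>> >         try (Admin admin = Admin.create(props)) {
>> >             Config config = admin.describeConfigs(Collections.singleton(topic))
>> >                 .all().get().get(topic);
>> >             // "producer" means the broker keeps whatever codec the producer sent
>> >             System.out.println(config.get("compression.type").value());
>> >         }
>> >     }
>> > }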
>> >
>> > On Tue, Aug 27, 2024 at 8:00 AM Vikram Singh
>> > <vikram.si...@nciportal.com.invalid> wrote:
>> >
>> >> Hello Akash,
>> >>
>> >> It's the same on the broker side as well.
>> >>
>> >> *Broker side config:-* compression.type=snappy
>> >>
>> >> On Mon, Aug 26, 2024 at 9:30 PM Akash Jain <akashjain0...@gmail.com>
>> >> wrote:
>> >>
>> >> > And what is it on broker?
>> >> >
>> >> > On Sunday, August 25, 2024, Vikram Singh <vikram.si...@nciportal.com
>> >> > .invalid>
>> >> > wrote:
>> >> >
>> >> > > Hello Akash,
>> >> > >
>> >> > >
>> >> > > Yes, I am using a compression type in the producer-side configuration,
>> >> > > as mentioned below.
>> >> > >
>> >> > > The compression type I am using is *snappy*.
>> >> > >
>> >> > >
>> >> > > *Compression on producer :-*
>> >> > >
>> >> > > producerProperties.put("compression.type",
>> >> > >     CloudKafkaProducerConfig.COMPRESSION_TYPE);
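>> >> > >
>> >> > > For fuller context, a rough sketch of the surrounding setup (the
>> >> > > bootstrap address is a placeholder, and I am assuming
>> >> > > CloudKafkaProducerConfig.COMPRESSION_TYPE resolves to "snappy"):
>> >> > >
>> >> > > Properties producerProperties = new Properties();
>> >> > > producerProperties.put("bootstrap.servers", "broker1:9092"); // placeholder
>> >> > > producerProperties.put("key.serializer",
>> >> > >     "org.apache.kafka.common.serialization.StringSerializer");
>> >> > > producerProperties.put("value.serializer",
>> >> > >     "org.apache.kafka.common.serialization.StringSerializer");
>> >> > > // the constant resolves to "snappy" in this setup
>> >> > > producerProperties.put("compression.type",
>> >> > >     CloudKafkaProducerConfig.COMPRESSION_TYPE);
>> >> > > KafkaProducer<String, String> producer =
>> >> > >     new KafkaProducer<>(producerProperties);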
>> >> > >
>> >> > > On Fri, Aug 23, 2024 at 5:56 PM Akash Jain <
>> akashjain0...@gmail.com>
>> >> > > wrote:
>> >> > >
>> >> > > > Hi Vikram. Can you share your code snippet? Are you using
>> >> > > > compression on the producer/broker side?
>> >> > > >
>> >> > > > On Friday, August 23, 2024, Vikram Singh <
>> >> vikram.si...@nciportal.com
>> >> > > > .invalid>
>> >> > > > wrote:
>> >> > > >
>> >> > > > > Hello,
>> >> > > > >
>> >> > > > > I am facing issues while producing messages to Kafka topics. The
>> >> > > > > issue occurs randomly. Please help me by referring to the logs
>> >> > > > > below.
>> >> > > > >
>> >> > > > > Logs :-
>> >> > > > > 1). ERROR [ReplicaManager broker=0] Error processing append
>> >> > > > > operation on partition MHM_CLZ_COM-AWS_123-7
>> >> > > > > (kafka.server.ReplicaManager)
>> >> > > > >
>> >> > > > > org.apache.kafka.common.KafkaException: Failed to decompress
>> >> > > > > record stream
>> >> > > > >
>> >> > > > > 2). Caused by: java.io.IOException: FAILED_TO_UNCOMPRESS(5)
>> >> > > > >
>> >> > > >
>> >> > >
>> >> >
>> >>
>> >
>>
>
