Thank you very much for the response!

> What you describe is exactly what the burst filter is designed for. Can you
> explain why it doesn’t work for you?




IIUC, the Filter and the Appender are two independent components.

The difference between our scenario's requirements and BurstFilter is as follows:

In our scenario, whether a message is discarded should depend on whether the Kafka
appender's write rate can keep up with the log generation rate.

That is, the message is expected to be discarded by the Kafka appender itself.
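
For reference, here is a minimal sketch of a plain Kafka appender configuration
(the topic name and broker address are placeholders):

    <Kafka name="Kafka" topic="app-logs">
      <PatternLayout pattern="%d %p %c{1.} [%t] %m%n"/>
      <Property name="bootstrap.servers">localhost:9092</Property>
    </Kafka>

As far as I can tell, nothing in this element lets the appender itself discard
events based on its actual write throughput.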




In contrast, BurstFilter is driven only by its own filter conditions.

It has no direct dependency on whether the Kafka appender's write rate can
keep up with the log output rate.

Events discarded by BurstFilter are dropped because they exceed the filter's
configured thresholds, not because of any back-pressure from the Kafka appender.
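
For comparison, attaching a BurstFilter to the same appender would look roughly
like this (the rate and maxBurst values are only examples):

    <Kafka name="Kafka" topic="app-logs">
      <!-- Drops INFO-and-below events once the fixed rate/maxBurst thresholds
           are exceeded, regardless of how fast the appender is actually
           writing to Kafka. -->
      <BurstFilter level="INFO" rate="100" maxBurst="1000"/>
      <PatternLayout pattern="%d %p %c{1.} [%t] %m%n"/>
      <Property name="bootstrap.servers">localhost:9092</Property>
    </Kafka>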




Please correct me if I'm wrong.

Any comments are appreciated.




Best, 

Yuepeng

At 2025-01-27 21:58:30, "Volkan Yazıcı" <vol...@yazi.ci> wrote:
>What you describe is exactly what the burst filter is designed for. Can you
>explain why it doesn’t work for you?
>
>I will soon write a detailed response on resurrecting Kafka appender.
>
>On Mon, 27 Jan 2025 at 14:41, Yuepeng Pan <panyuep...@apache.org> wrote:
>
>> Thanks Volkan for the comments and help.
>>
>>
>>
>>
>> It sounds like neither of the two methods mentioned above can meet
>> the business scenario requirements:
>> We just want the Kafka appender to discard data only when its output
>> rate is lower than the log production rate.
>>
>>
>>
>>
>> > If you are actively using it, either consider migrating to an alternative,
>> > or step up as a maintainer, please.
>>
>>
>>
>>
>> I'm willing to make some contributions to the Kafka appender to the best
>> of my ability.
>>
>> In addition, I am curious:
>> - If I want to support the feature where the Kafka appender can discard
>> logs when the output rate is lower than the log generation rate, what
>> specifications or rules should I follow to advance this feature?
>> - Is this feature reasonable in the eyes of the community's users and
>> developers?
>>
>>
>>
>>
>> Thank you very much.
>>
>>
>>
>>
>> Best,
>>
>> Yuepeng
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> At 2025-01-27 17:27:03, "Volkan Yazıcı" <vol...@yazi.ci> wrote:
>> >Hello Yuepeng,
>> >
>> >If it is okay to drop log events when the appender isn't keeping up, you
>> >can use a burst filter
>> ><https://logging.apache.org/log4j/2.x/manual/filters.html#BurstFilter>. If
>> >your burst/congestion periods are temporary and you don't want to lose
>> >events, you can consider employing an async. appender
>> ><https://logging.apache.org/log4j/2.x/manual/appenders/delegating.html#AsyncAppender>
>> >as a buffer.
>> >
>> >Note that the Kafka appender
>> ><https://logging.apache.org/log4j/2.x/manual/appenders/message-queue.html#KafkaAppender>
>> >sadly needs some love. Due to lack of community interest and maintainer time, it
>> >is planned to be dropped in the next major release, i.e., Log4j 3. If you
>> >are actively using it, either consider migrating to an alternative, or step
>> >up as a maintainer, please.
>> >
>> >Kind regards.
>> >
>> >On Sun, Jan 26, 2025 at 12:09 PM Yuepeng Pan <panyuep...@apache.org> wrote:
>> >
>> >> Hi, masters..
>> >>
>> >>
>> >> Recently, I have enabled the Kafka appender in certain scenarios to
>> >> collect logs, but we encountered an issue:
>> >> When the log generation speed exceeds the write speed of Kafka,
>> >> it negatively impacts the processing speed of core business logic because
>> >> the high-frequency log output is embedded within the core business logic.
>> >>
>> >>
>> >> May I know is there any available parameter for optimizing this issue?
>> >>
>> >>
>> >> Thank you~
>> >>
>> >>
>> >> Best,
>> >> Yuepeng Pan
>>
