Yuepeng, I understand your case: indeed, a burst filter won't help if you
don't know the capacity of the Kafka writer in advance, which I presume is
your situation. Have you considered the async. appender
<https://logging.apache.org/log4j/2.x/manual/appenders/delegating.html#AsyncAppender>?
It can be configured to drop log events when its buffer is full, i.e., when
Kafka is not keeping up. For instance:

<Kafka name="KAFKA" ...>
  ...
</Kafka>
<Async name="KAFKA_ASYNC" blocking="false">
  <AppenderRef ref="KAFKA"/>
</Async>
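
For completeness, here is a minimal sketch of how this could look in a full
configuration; the topic name, bootstrap servers, and bufferSize value are
placeholders you would adjust for your setup:

<Configuration>
  <Appenders>
    <Kafka name="KAFKA" topic="app-logs">
      <PatternLayout pattern="%d %p %c{1.} [%t] %m%n"/>
      <Property name="bootstrap.servers">localhost:9092</Property>
    </Kafka>
    <!-- blocking="false" makes the Async appender discard events (or route
         them to an error appender, if one is configured) once its
         bufferSize-bounded queue is full, instead of blocking the caller. -->
    <Async name="KAFKA_ASYNC" blocking="false" bufferSize="1024">
      <AppenderRef ref="KAFKA"/>
    </Async>
  </Appenders>
  <Loggers>
    <Root level="INFO">
      <AppenderRef ref="KAFKA_ASYNC"/>
    </Root>
  </Loggers>
</Configuration>

That way the business threads only pay the cost of enqueueing an event, and
back-pressure from Kafka never propagates into the core logic.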


Regarding breathing life into the Kafka appender, I agree with Piotr: the
Kafka project is the ideal place to maintain it. In the Kafka appender,
roughly 20% of the code deals with complying with Log4j contracts, and 80%
with using and managing Kafka resources. The Kafka team possesses the best
expertise to maintain such a product, so I suggest having this discussion
with the Kafka maintainers.

On Mon, Jan 27, 2025 at 3:26 PM Yuepeng Pan <panyuep...@apache.org> wrote:

> Thank you very much for the response!
>
> > What you describe is exactly what the burst filter is designed for. Can you
>
> > explain why it doesn’t work for you?
>
>
>
>
> IIUC, the Filter and the Appender are two independent components.
>
> The difference between our scenario requirements and BurstFilter is as
> follows:
>
> In our scenario, whether a message is discarded depends on whether the
> Kafka appender’s write rate can support the log generation rate.
>
> The message is expected to be discarded by the Kafka appender.
>
>
>
>
> In contrast, BurstFilter is only influenced by the filter conditions.
>
> It has no direct logical dependency on whether the Kafka appender's write
> rate can exceed the log output rate.
>
> The data discarded by BurstFilter is filtered out because it does not meet
> the threshold conditions of the filter, not because of the Kafka appender.
>
>
>
>
> Please correct me if I'm wrong.
>
> Any comment is appreciated.
>
>
>
>
> Best,
>
> Yuepeng
>
>
>
>
>
>
>
>
>
>
>
> At 2025-01-27 21:58:30, "Volkan Yazıcı" <vol...@yazi.ci> wrote:
> >What you describe is exactly what the burst filter is designed for. Can you
> >explain why it doesn’t work for you?
> >
> >I will soon write a detailed response on resurrecting Kafka appender.
> >
> >On Mon, Jan 27, 2025 at 14:41, Yuepeng Pan <panyuep...@apache.org> wrote:
> >
> >> Thanks Volkan for the comments and help.
> >>
> >>
> >>
> >>
> >> It sounds like neither of the two methods mentioned above can meet
> >>
> >> the business scenario requirements:
> >>
> >> We just want the Kafka appender to discard data only when its output
> >>
> >> rate is lower than the log production rate.
> >>
> >>
> >>
> >>
> >> > If you are actively using it, either consider migrating to an alternative,
> >>
> >> > or step up as a maintainer, please.
> >>
> >>
> >>
> >>
> >> I'm willing to make some contributions to the Kafka appender to the best
> >> of my ability.
> >>
> >> In addition, I am curious:
> >> - if I want to support the feature where the Kafka
> >>
> >> appender can discard logs when the output rate is lower than the log
> >> generation rate,
> >>
> >> what specifications or rules should I follow to advance this feature?
> >> - Is this feature reasonable in the eyes of the community's users and
> >> developers?
> >>
> >>
> >>
> >>
> >> Thank you very much.
> >>
> >>
> >>
> >>
> >> Best,
> >>
> >> Yuepeng
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >> At 2025-01-27 17:27:03, "Volkan Yazıcı" <vol...@yazi.ci> wrote:
> >> >Hello Yuepeng,
> >> >
> >> >If it is okay to drop log events when the appender isn't keeping up, you
> >> >can use a burst filter
> >> ><https://logging.apache.org/log4j/2.x/manual/filters.html#BurstFilter>.
> >> >If
> >> >your burst/congestion periods are temporary and you don't want to lose
> >> >events, you can consider employing an async. appender
> >> ><https://logging.apache.org/log4j/2.x/manual/appenders/delegating.html#AsyncAppender>
> >> >as a buffer.
> >> >
> >> >Note that the Kafka appender
> >> ><https://logging.apache.org/log4j/2.x/manual/appenders/message-queue.html#KafkaAppender>
> >> >sadly needs some love. Due to lack of community interest and maintainer
> >> >time, it is planned to be dropped in the next major release, i.e., Log4j
> >> >3. If you are actively using it, either consider migrating to an
> >> >alternative, or step up as a maintainer, please.
> >> >
> >> >Kind regards.
> >> >
> >> >On Sun, Jan 26, 2025 at 12:09 PM Yuepeng Pan <panyuep...@apache.org> wrote:
> >> >
> >> >> Hi, masters..
> >> >>
> >> >>
> >> >> Recently, I have enabled the Kafka appender in certain scenarios to
> >> >> collect logs, but we encountered an issue:
> >> >> When the log generation speed exceeds the write speed of Kafka,
> >> >> it negatively impacts the processing speed of core business logic
> >> >> because the high-frequency log output is embedded within the core
> >> >> business logic.
> >> >>
> >> >>
> >> >> May I know if there is any available parameter for mitigating this
> >> >> issue?
> >> >>
> >> >>
> >> >> Thank you~
> >> >>
> >> >>
> >> >> Best,
> >> >> Yuepeng Pan
> >>
>
