Tzachi,

The more detail you can give about your use case in the Jira, the better.

You can propose what configuration would look like too.
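
For example (purely illustrative - the discard attributes below are made up
and do not exist in Log4j2 today, they only sketch one possible shape):

  <!-- Hypothetical: discard events instead of blocking when the ring
       buffer is full; "FlumeAvro" is a placeholder appender name. -->
  <AsyncRoot level="info" discardWhenFull="true" discardThreshold="INFO">
    <AppenderRef ref="FlumeAvro"/>
  </AsyncRoot>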

Feel free to contribute a patch as well.

Jiras with patches usually get more attention.

Gary

On Thu, Jul 9, 2015 at 6:21 PM, Tzachi Ezra <[email protected]> wrote:

> I opened a ticket (LOG4J2-1080) for this feature request. Is there
> anything else I should report regarding the Flume appender performance
> drop when an exception is thrown?
>
>
>  Tzachi.
>
>
>  ------------------------------
> *From:* Remko Popma <[email protected]>
> *Sent:* Thursday, July 09, 2015 3:22 PM
> *To:* Log4J Developers List
> *Subject:* Re: Ring Buffer capacity gets full when Flume Appender fails
> to append logs
>
>  Currently there's no way to drop events when the ring buffer is full.
> Please open a feature request Jira ticket for this.
>
>  Remko
>
> Sent from my iPhone
>
> On 2015/07/10, at 4:04, Tzachi Ezra <[email protected]> wrote:
>
>  Hey!
>
> I am currently testing the log4j-flume-ng appender and running into some
> issues. It seems like whenever the Log4j appender fails to log an event, it
> causes the Disruptor ring buffer to fill up, which slows down the whole
> system.
>
> My setup looks more or less like this:
> process 1: a Java app which uses Log4j2 (with flume-ng’s Avro appender)
> process 2: a local flume-ng agent which receives the logs via an Avro
> source and processes them
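>
> For context, a minimal Log4j2 configuration for this kind of setup looks
> roughly like the sketch below (appender name, host and port are
> placeholders, and I am showing an AsyncRoot logger to stand in for the
> async/ring-buffer setup):
>
>   <Configuration>
>     <Appenders>
>       <!-- Avro client pointing at the local flume-ng agent (process 2) -->
>       <Flume name="FlumeAvro" type="Avro" compress="false">
>         <Agent host="localhost" port="41414"/>
>       </Flume>
>     </Appenders>
>     <Loggers>
>       <!-- Asynchronous root logger backed by the Disruptor ring buffer -->
>       <AsyncRoot level="info">
>         <AppenderRef ref="FlumeAvro"/>
>       </AsyncRoot>
>     </Loggers>
>   </Configuration>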
>
> Here are my findings:
> When Flume (process 2) is up and running, everything actually looks really
> good. The ring buffer's capacity is almost always fully available and there
> are no performance issues.
> The problem starts when I shut down process 2 - I am trying to simulate a
> case in which this process crashes, as I do not want it to affect process
> 1. As soon as I shut down Flume I start getting exceptions produced by
> Log4j telling me it cannot append the log events - so far that makes sense.
> The thing is that, at the same time, I can see the ring buffer starting to
> fill up. As long as it's not completely full, process 1's throughput stays
> the same. The problem gets serious as soon as the buffer reaches full
> capacity. When that happens the throughput drops by 80% and it does not
> seem to recover from this state. But as soon as I restart process 2, things
> get back to normal pretty quickly - the buffer drains and the throughput
> climbs back to what it was before. I assume that, for some reason, failing
> to append makes the RingBuffer consumer thread much slower.
>
> Besides checking why the Flume appender performs slower when an exception
> is thrown (since the appender fails), I wish there were a way to discard
> the log events, instead of what seems to be blocking, whenever the buffer
> reaches its capacity (for whatever reason that might happen), since I don't
> want it to affect the main application. I didn't find anything in the
> documentation about this (only for the Async appender - see the sketch
> below), so if there is a way I would greatly appreciate it if you could
> point it out to me. If it is impossible, should I open an enhancement/bug
> ticket?
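>
> The Async appender option I am referring to looks roughly like the sketch
> below (names and sizes are placeholders):
>
>   <!-- AsyncAppender can be told not to block when its queue fills up;
>        overflowing events go to the error appender, or are dropped -->
>   <Async name="AsyncFlume" blocking="false" bufferSize="1024">
>     <AppenderRef ref="FlumeAvro"/>
>   </Async>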
>
> Thank you,
> Tzachi
>
>


-- 
E-Mail: [email protected] | [email protected]
Java Persistence with Hibernate, Second Edition
<http://www.manning.com/bauer3/>
JUnit in Action, Second Edition <http://www.manning.com/tahchiev/>
Spring Batch in Action <http://www.manning.com/templier/>
Blog: http://garygregory.wordpress.com
Home: http://garygregory.com/
Tweet! http://twitter.com/GaryGregory
