Hey! I am currently testing the log4j-flume-ng appender and running into some issues. It seems that whenever the log4j appender fails to log an event, the Disruptor ring buffer fills up, which slows down the whole system.
My setup looks more or less like this:

process 1: a Java app that uses log4j2 (with flume-ng's Avro appender)
process 2: a local flume-ng agent that receives the logs through an Avro source and processes them

(A rough config sketch is in the P.S. at the end of this message.)

Here are my findings:

When Flume (process 2) is up and running, everything actually looks really good. The ring buffer's remaining capacity is almost always at its maximum (i.e., the buffer stays nearly empty) and there are no performance issues.

The problem starts when I shut down process 2 - I am trying to simulate a case in which this process crashes, because I do not want it to affect process 1. As soon as I shut down Flume I start getting exceptions from log4j telling me it cannot append the log events - so far that makes sense. The thing is, at the same time I can see that the ring buffer starts to fill up. As long as it is not completely full, process 1's throughput stays the same. The problem gets serious as soon as the buffer reaches full capacity. When that happens the throughput drops by 80% and it does not seem to recover from this state. But as soon as I restart process 2, things get back to normal pretty quickly - the buffer gets emptied and the throughput climbs back to what it was before. I assume that for some reason a failed append makes the RingBuffer consumer thread much slower.

Besides checking why the Flume appender performs slower when an exception is thrown (i.e., when the append fails), I wish there were a way to discard log events, instead of what appears to be blocking, whenever the buffer reaches its capacity (for whatever reason that might happen), since I do not want it to affect the main application. I did not find anything in the documentation about that (only for the Async appender), so if there is a way it would be greatly appreciated if you could point it out for me. If it is impossible, should I open an enhancement/bug ticket?

Thank you,
Tzachi
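P.S. For reference, here is roughly what the two processes look like. The names, port numbers, and the logger sink below are placeholders rather than my exact configuration, and I am showing async loggers via AsyncRoot since that is where the Disruptor ring buffer comes from in my case.

log4j2.xml in process 1 (the Java app):

<Configuration status="warn">
  <Appenders>
    <!-- Avro-type Flume appender pointing at the local flume-ng agent; the port is a placeholder -->
    <Flume name="FlumeAvro" type="Avro" compress="true">
      <Agent host="localhost" port="8800"/>
      <RFC5424Layout enterpriseNumber="18060" includeMDC="true" appName="MyApp"/>
    </Flume>
  </Appenders>
  <Loggers>
    <!-- Async root logger (assumption on my side), which is where the ring buffer fills up -->
    <AsyncRoot level="info">
      <AppenderRef ref="FlumeAvro"/>
    </AsyncRoot>
  </Loggers>
</Configuration>

Flume agent configuration in process 2 (the local flume-ng agent):

# Avro source listening on the port the appender sends to;
# the memory channel and logger sink are just placeholders
agent1.sources = avroSrc
agent1.channels = memCh
agent1.sinks = snk

agent1.sources.avroSrc.type = avro
agent1.sources.avroSrc.bind = 0.0.0.0
agent1.sources.avroSrc.port = 8800
agent1.sources.avroSrc.channels = memCh

agent1.channels.memCh.type = memory

agent1.sinks.snk.type = logger
agent1.sinks.snk.channel = memCh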
