[ https://issues.apache.org/jira/browse/LOG4J2-1080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15054195#comment-15054195 ]

Remko Popma commented on LOG4J2-1080:
-------------------------------------

Can you raise a separate Jira ticket for this and attach an example 
configuration? 
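
For reference, a minimal sketch of the kind of configuration the scenario 
below describes: all loggers made asynchronous, feeding the Flume Avro 
appender. This is an illustration only; the host, port, names and levels 
are assumptions, not taken from the reporter's actual setup.

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Illustrative log4j2.xml; host and port are assumptions. All
         loggers are made asynchronous by starting the JVM with
         -DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector
         so that every event goes through the Disruptor RingBuffer. -->
    <Configuration status="warn">
      <Appenders>
        <!-- Avro client pointing at the local flume-ng agent (process 2). -->
        <Flume name="FlumeAvro" type="Avro" compress="false">
          <Agent host="localhost" port="4141"/>
        </Flume>
      </Appenders>
      <Loggers>
        <Root level="info">
          <AppenderRef ref="FlumeAvro"/>
        </Root>
      </Loggers>
    </Configuration>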

> Drop events when the RingBuffer is full
> ---------------------------------------
>
>                 Key: LOG4J2-1080
>                 URL: https://issues.apache.org/jira/browse/LOG4J2-1080
>             Project: Log4j 2
>          Issue Type: New Feature
>            Reporter: tzachi
>            Assignee: Remko Popma
>             Fix For: 2.5.1
>
>         Attachments: AsyncLogger.dropEvents.patch
>
>
> I am running into a performance issue with an appender in a certain 
> scenario (described at the bottom) that causes the RingBuffer to reach 
> full capacity. When that happens I can see that my app's throughput 
> drops significantly.
> I think it would be really useful to be able to configure the 
> RingBuffer handler to drop events whenever the buffer reaches capacity, 
> instead of the current behavior, which appears to block, as I don't 
> want logging to affect the main application.
> ---------------------------------------------------------------------
> Here is the scenario that led me to this request:
> I am currently testing the log4j-flume-ng appender and running into 
> some issues. It seems that whenever the log4j appender fails to log an 
> event, the disruptor ring buffer fills up, which slows down the whole 
> system.
> My setup looks more or less like this: 
> process 1: a Java app which uses log4j2 (with flume-ng's Avro appender)
> process 2: a local flume-ng agent which receives the logs via an Avro 
> source and processes them 
> Here are my findings:
> When Flume (process 2) is up and running, everything actually looks 
> really good: the ring buffer almost always has free capacity and there 
> are no performance issues. The problem starts when I shut down 
> process 2 - I am trying to simulate a case in which this process 
> crashes, as I do not want it to affect process 1. As soon as I shut 
> down Flume I start getting exceptions from log4j telling me it cannot 
> append the log - so far that makes sense. The thing is that at the same 
> time I can see the ring buffer start to fill up. As long as it is not 
> totally full, process 1's throughput stays the same. The problem gets 
> serious as soon as the buffer reaches full capacity: when that happens 
> the throughput drops by 80% and does not seem to recover from this 
> state. But as soon as I restart process 2, things get back to normal 
> pretty quickly - the buffer empties and the throughput climbs back to 
> what it was before. I assume that for some reason a failure to append 
> makes the RingBuffer consumer thread significantly slower.
> Besides checking why the flume appender performs more slowly when an 
> exception is thrown, I wish I could just discard the log events when 
> the buffer gets full.


