Re: Re: log4j2 issue

2017-03-20 Thread Yang Rui
Hi Matt Sicker, Sorry for the confusion; here are my detailed test steps: 1. Start an application that continuously sends logs through a FailoverAppender (primary: KafkaAppender; failover: RollingFile); the log4j2 configuration file is attached as log4j2.xml. 2. After about 1 second, kill
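The attached log4j2.xml is not included in the archive, but a minimal sketch of the setup the test steps describe (KafkaAppender as primary, RollingFile as failover) might look like the following; the topic name, broker address, file paths, and retry interval are assumptions for illustration:

```xml
<Configuration status="warn">
  <Appenders>
    <!-- Primary appender: sends log events to a Kafka topic (names assumed) -->
    <Kafka name="Kafka" topic="app-logs">
      <PatternLayout pattern="%d %p %c{1.} [%t] %m%n"/>
      <Property name="bootstrap.servers">localhost:9092</Property>
    </Kafka>
    <!-- Failover target: local rolling file -->
    <RollingFile name="RollingFile" fileName="logs/app.log"
                 filePattern="logs/app-%i.log.gz">
      <PatternLayout pattern="%d %p %c{1.} [%t] %m%n"/>
      <SizeBasedTriggeringPolicy size="10 MB"/>
    </RollingFile>
    <!-- Route everything through the Failover appender -->
    <Failover name="Failover" primary="Kafka" retryIntervalSeconds="60">
      <Failovers>
        <AppenderRef ref="RollingFile"/>
      </Failovers>
    </Failover>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="Failover"/>
    </Root>
  </Loggers>
</Configuration>
```

Note that with this setup, failover only kicks in once the KafkaAppender actually throws on append; events already handed to the Kafka producer's internal buffer are not redirected, which is consistent with the loss observed when the broker is killed.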

Re: log4j2 issue

2017-03-20 Thread Ralph Goers
This scenario is why I use Flume. The FlumeAppender will write the event to disk locally and then send it to a downstream Flume agent. That Flume agent writes it to its own local disk and sends a response back to the FlumeAppender allowing it to delete the event from disk. The remote agent
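The local-disk-then-forward behavior Ralph describes corresponds to the FlumeAppender's persistent mode, which stores events in a local database before delivery. A minimal sketch, with the agent hosts, port, and data directory as assumed placeholder values:

```xml
<!-- FlumeAppender in persistent mode: events are written to a local
     store (dataDir) and only removed after the downstream agent
     acknowledges receipt. Hosts/port/dir are illustrative. -->
<Flume name="FlumeAppender" type="persistent" dataDir="./flumeData">
  <Agent host="flume-host-1" port="8800"/>
  <Agent host="flume-host-2" port="8800"/>
  <PatternLayout pattern="%d %p %c{1.} [%t] %m%n"/>
</Flume>
```

Listing more than one Agent gives the appender a fallback destination if the first agent is unreachable.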

Re: Re: log4j2 issue

2017-03-20 Thread Matt Sicker
Which data were lost? Was it pending log messages that hadn't been sent to the KafkaProducer yet, or was it buffered messages inside the KafkaProducer? You could help debug this by adding the Kafka property "batch.size" set to "1", or perhaps setting "linger.ms" to around "5" milliseconds. On 20 March 2017 at 03:29,
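The producer properties Matt suggests can be passed straight through to the KafkaProducer via Property elements on the KafkaAppender. A sketch (topic and broker address assumed):

```xml
<!-- KafkaAppender forwarding producer tuning properties for debugging:
     batch.size=1 disables batching, linger.ms=5 bounds how long a
     record may sit in the producer buffer before being sent. -->
<Kafka name="Kafka" topic="app-logs">
  <PatternLayout pattern="%d %p %c{1.} [%t] %m%n"/>
  <Property name="bootstrap.servers">localhost:9092</Property>
  <Property name="batch.size">1</Property>
  <Property name="linger.ms">5</Property>
</Kafka>
```

Shrinking the batch and linger window narrows how many records can be lost inside the producer's in-memory buffer, which helps distinguish buffered-message loss from messages that never reached the producer.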

Re: log4j2 issue

2017-03-20 Thread Yang Rui
Hi Matt Sicker, I used the FailoverAppender, but I found that the moment Kafka went down, data was lost. What measures can be taken to avoid this situation? The attachment is the configuration file: log4j2.xml Thanks, Rui From: Matt Sicker