This scenario is why I use Flume. The FlumeAppender writes the event to 
local disk and then sends it to a downstream Flume agent. That agent 
writes the event to its own local disk and sends a response back to the 
FlumeAppender telling it that it may delete the event from disk. The 
remote agent then forwards the event to Kafka. This guarantees there is 
no data loss, although duplicates are possible if you run multiple Flume 
agents for high availability.
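
For reference, a minimal sketch of a persistent FlumeAppender configuration 
(the data directory, agent hosts, and ports are placeholders, not values 
from this thread):

    <Flume name="FlumeAppender" type="persistent" dataDir="./flumeData" compress="true">
      <!-- Two downstream agents for high availability -->
      <Agent host="flume-agent-1.example.com" port="8800"/>
      <Agent host="flume-agent-2.example.com" port="8800"/>
      <RFC5424Layout enterpriseNumber="18060" includeMDC="true" appName="MyApp"/>
    </Flume>

With type="persistent" the appender writes each event to the local dataDir 
before sending, which is what provides the guarantee described above.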

Ralph

> On Mar 20, 2017, at 8:25 AM, Matt Sicker <boa...@gmail.com> wrote:
> 
> Which data were lost? Was it pending log messages that hadn't been sent to
> the KafkaProducer yet, or was it buffered messages inside KafkaProducer?
> You could help debug this by setting the Kafka property "batch.size" to
> "1", or perhaps "linger.ms" to "5" milliseconds or so.
> 
> On 20 March 2017 at 03:29, Yang Rui <yang_ru...@outlook.com> wrote:
> 
>> Hi, Matt Sicker
>> 
>> I used the FailoverAppender, but I found that at the moment Kafka went
>> down, the data was lost.
>> 
>> What kind of measures can be taken to avoid this situation?
>> 
>> The attachment is the configuration file, log4j2.xml.
>> 
>> Thanks,
>> 
>> Rui
>> ------------------------------
>> *From:* Matt Sicker <boa...@gmail.com>
>> *Sent:* March 14, 2017 15:19
>> *To:* Log4J Users List
>> *Subject:* Re: log4j2 issue
>> 
>> The gist of what you're probably looking for is a failover appender
>> configuration:
>> <https://logging.apache.org/log4j/2.x/manual/appenders.html#FailoverAppender>.
>> This can be used to switch to another appender when one fails, which is
>> perfect for networked appenders.
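>> 
>> A minimal sketch of such a configuration, assuming appenders named
>> "Kafka" and "File" are already defined elsewhere in log4j2.xml:
>> 
>>     <Failover name="Failover" primary="Kafka" retryIntervalSeconds="60">
>>       <!-- Used whenever the primary (Kafka) appender fails -->
>>       <Failovers>
>>         <AppenderRef ref="File"/>
>>       </Failovers>
>>     </Failover>
>> 
>> Loggers would then reference "Failover" instead of "Kafka" directly.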
>> 
>> On 14 March 2017 at 07:00, Yang Rui <yang_ru...@outlook.com> wrote:
>> 
>>> Hi,
>>> 
>>> I am Rui from China.
>>> 
>>> We use both the KafkaAppender (with an AsyncAppender wrapper) and the
>>> FileAppender of Log4j 2, version 2.6.2, in the application.
>>> 
>>> Here is the scenario: when the Kafka cluster goes down and stops
>>> serving, the application slows down and waits for the given timeout
>>> (request.timeout.ms) before finally responding (once the bufferSize of
>>> the async Kafka appender is reached).
>>> 
>>> I am wondering if there is any solution so that the FileAppender can
>>> always work normally, without any performance impact caused by the
>>> KafkaAppender. In other words, the KafkaAppender could "DISCARD" the
>>> logs when the Kafka cluster is down, while the application can still
>>> output the logs through the FileAppender.
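>>> 
>>> A rough sketch of the setup described above (appender names, topic,
>>> file path, and sizes are placeholders):
>>> 
>>>     <Appenders>
>>>       <Kafka name="Kafka" topic="app-log">
>>>         <PatternLayout pattern="%d %m%n"/>
>>>         <Property name="bootstrap.servers" value="localhost:9092"/>
>>>         <!-- How long the producer waits before giving up on Kafka -->
>>>         <Property name="request.timeout.ms" value="30000"/>
>>>       </Kafka>
>>>       <Async name="AsyncKafka" bufferSize="1024">
>>>         <AppenderRef ref="Kafka"/>
>>>       </Async>
>>>       <File name="File" fileName="logs/app.log">
>>>         <PatternLayout pattern="%d %m%n"/>
>>>       </File>
>>>     </Appenders>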
>>> 
>>> 
>>> Thanks,
>>> Rui
>>> 
>>> 
>>> 
>>> 
>> 
>> 
>> 
>> --
>> Matt Sicker <boa...@gmail.com>
>> 
>> 
>> 
> 
> 
> 
> -- 
> Matt Sicker <boa...@gmail.com>



---------------------------------------------------------------------
To unsubscribe, e-mail: log4j-user-unsubscr...@logging.apache.org
For additional commands, e-mail: log4j-user-h...@logging.apache.org
