Do you think you could file a ticket in Jira so we don't lose this in the
mailing lists? <https://issues.apache.org/jira/browse/LOG4J2>

On 20 March 2017 at 21:32, Yang Rui <yang_ru...@outlook.com> wrote:

> Hi, Matt Sicker
>
>
> Sorry for the confusion; here are my detailed test steps:
>
> 1. Start an application that continuously sends logs through a
>    FailoverAppender (primary: KafkaAppender; failover: RollingFile). The
>    log4j2 configuration file is attached: log4j2.xml
>
> 2. After about 1 second, kill the Kafka process, then wait for the
>    application logging to finish.
>
> 3. Observe that the number of logs in the log file (8972) plus the number
>    of logs on Kafka (984) does not add up to the total number of
>    application logs (10000).
>
> Note: the 'batch.size' of the KafkaProducer is set to 0.
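For readers without the attachment, here is a minimal sketch of the kind of configuration the test describes (a FailoverAppender with a KafkaAppender primary and a RollingFile failover). All names, paths, and server addresses below are illustrative placeholders, not the contents of the actual attached log4j2.xml:

```xml
<Configuration status="warn">
  <Appenders>
    <!-- Primary appender: ships events to a Kafka topic -->
    <Kafka name="Kafka" topic="app-logs">
      <PatternLayout pattern="%d %p %c{1.} %m%n"/>
      <Property name="bootstrap.servers">localhost:9092</Property>
      <!-- As in the test: disable producer batching -->
      <Property name="batch.size">0</Property>
    </Kafka>
    <!-- Failover target: local rolling file -->
    <RollingFile name="Backup" fileName="logs/app.log"
                 filePattern="logs/app-%i.log.gz">
      <PatternLayout pattern="%d %p %c{1.} %m%n"/>
      <SizeBasedTriggeringPolicy size="10 MB"/>
    </RollingFile>
    <!-- Switch to the file appender when Kafka fails -->
    <Failover name="Failover" primary="Kafka" retryIntervalSeconds="30">
      <Failovers>
        <AppenderRef ref="Backup"/>
      </Failovers>
    </Failover>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="Failover"/>
    </Root>
  </Loggers>
</Configuration>
```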
>
>
> I suspect that the logs are not being handed to the KafkaProducer, but I
> am still tracking down the root cause.
>
> Thanks,
> Rui
>
>
>
> ------------------------------
> *From:* Matt Sicker <boa...@gmail.com>
> *Sent:* 20 March 2017 15:25
> *To:* Log4J Users List
> *Subject:* Re: Re: log4j2 issue
>
> Which data were lost? Was it pending log messages that hadn't been sent to
> the KafkaProducer yet, or was it buffered messages inside KafkaProducer?
> You could help debug this by setting the Kafka property "batch.size" to
> "1", or perhaps "linger.ms" to around "5" milliseconds.
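Concretely, those producer properties can be passed straight through the KafkaAppender as nested Property elements. This is a sketch for debugging only; the topic and server names are placeholders:

```xml
<Kafka name="Kafka" topic="app-logs">
  <PatternLayout pattern="%d %p %m%n"/>
  <Property name="bootstrap.servers">localhost:9092</Property>
  <!-- Send each record immediately instead of accumulating a batch -->
  <Property name="batch.size">1</Property>
  <!-- Wait at most 5 ms before flushing a batch to the broker -->
  <Property name="linger.ms">5</Property>
</Kafka>
```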
>
> On 20 March 2017 at 03:29, Yang Rui <yang_ru...@outlook.com> wrote:
>
> > Hi, Matt Sicker
> >
> > I used the FailoverAppender, but I found that data was lost at the
> > moment Kafka went down.
> >
> > What measures can be taken to avoid this situation?
> >
> > The attachment is the configuration file: log4j2.xml
> >
> > Thanks,
> >
> > Rui
> > ------------------------------
> > *From:* Matt Sicker <boa...@gmail.com>
> > *Sent:* 14 March 2017 15:19
> > *To:* Log4J Users List
> > *Subject:* Re: log4j2 issue
>
> >
> > The gist of what you're probably looking for is a failover appender
> > configuration:
> > <https://logging.apache.org/log4j/2.x/manual/appenders.html#FailoverAppender>.
> > This can be used to switch to another appender when one fails, which
> > makes it well suited to networked appenders.
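As a rough sketch of the wiring the manual describes (the appender names "Kafka" and "File" here are placeholders assumed to be defined elsewhere in the same configuration):

```xml
<!-- Route events to Kafka; fall back to the file appender on failure,
     retrying the primary every 60 seconds -->
<Failover name="Failover" primary="Kafka" retryIntervalSeconds="60">
  <Failovers>
    <AppenderRef ref="File"/>
  </Failovers>
</Failover>
```

Loggers then reference "Failover" rather than the Kafka appender directly.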
> >
> > On 14 March 2017 at 07:00, Yang Rui <yang_ru...@outlook.com> wrote:
> >
> > > Hi,
> > >
> > > I am Rui from China.
> > >
> > > We use both the KafkaAppender (wrapped in an AsyncAppender) and the
> > > FileAppender of log4j2 version 2.6.2 in our application.
> > >
> > > Here is the scenario: when the Kafka cluster goes down and stops
> > > serving, the application slows down and waits for the given timeout
> > > (request.timeout.ms) before finally responding, once the bufferSize
> > > of the AsyncAppender is reached.
> > >
> > > I am wondering whether there is a solution so that the FileAppender
> > > can always work normally, without any performance impact from the
> > > KafkaAppender. In other words, the KafkaAppender could "DISCARD" logs
> > > while the Kafka cluster is down, and the application would still
> > > output the logs through the FileAppender.
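One way to approximate the "discard instead of block" behaviour described above is to set blocking to false on the AsyncAppender that wraps the Kafka appender: when its queue fills up (because Kafka is down and sends are stalling), new events are dropped to the error appender instead of stalling the calling thread. A sketch, with placeholder names; the FileAppender would be attached to the logger separately so file output is unaffected:

```xml
<!-- Non-blocking async wrapper: drops events when the buffer is full
     rather than making application threads wait on Kafka -->
<Async name="AsyncKafka" blocking="false" bufferSize="1024">
  <AppenderRef ref="Kafka"/>
</Async>
```

The trade-off is explicit data loss under backpressure, which matches the "DISCARD" behaviour asked for here.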
> > >
> > >
> > > Thanks,
> > > Rui
> > >
> > >
> > >
> > > ---------------------------------------------------------------------
> > > To unsubscribe, e-mail: log4j-user-unsubscr...@logging.apache.org
> > > For additional commands, e-mail: log4j-user-h...@logging.apache.org
> > >
> >
> >
> >
> > --
> > Matt Sicker <boa...@gmail.com>
> >
> >
> >
>
>
>
> --
> Matt Sicker <boa...@gmail.com>
>
>
>



-- 
Matt Sicker <boa...@gmail.com>
