Re: Re: Re: log4j2 issue

2017-03-21 Thread Matt Sicker
Do you think you could file a ticket in Jira so we don't lose this in the
mailing lists? <https://issues.apache.org/jira/browse/LOG4J2>




-- 
Matt Sicker <boa...@gmail.com>


Re: Re: log4j2 issue

2017-03-20 Thread Yang Rui
Hi, Matt Sicker


Sorry for the confusion; here are my detailed test steps:


1. Start an application that continuously sends logs through a FailoverAppender (primary: KafkaAppender; failover: RollingFile); the log4j2 configuration file is attached as log4j2.xml.

2. After about 1 second, kill the Kafka process, then wait for the application logging to finish.

3. Observe that the number of logs in the log file (8972) plus the number of logs on Kafka (984) does not equal the total (1) of application logs.

Note: the KafkaProducer 'batch.size' is set to 0.


I suspect the lost logs were never handed to the KafkaProducer; I am still tracking down the root cause.

Thanks,
Rui




[Attachment log4j2.xml: the XML markup was stripped by the archive. Recoverable values: a kafka-servers property set to ip:port, a log directory /applog/logging, and a Kafka appender referencing ${kafka-servers} with producer settings 3000, 3, 0, 0 (the property names were stripped).]

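For orientation, a minimal sketch of what the stripped attachment likely contained, assuming standard Log4j 2.6 element names; the topic, layouts, and rollover policy are placeholders, and the producer property names (retry.backoff.ms, retries, batch.size, linger.ms) are guesses mapped onto the surviving values 3000, 3, 0, 0:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="warn">
  <Properties>
    <Property name="kafka-servers">ip:port</Property>
    <Property name="log-path">/applog/logging</Property>
  </Properties>
  <Appenders>
    <!-- Primary appender: sends each log event to a Kafka topic.
         ignoreExceptions="false" lets failures propagate so Failover can catch them. -->
    <Kafka name="kafkaAppender" topic="app-log" ignoreExceptions="false">
      <PatternLayout pattern="%d %p %c{1.} [%t] %m%n"/>
      <Property name="bootstrap.servers">${kafka-servers}</Property>
      <Property name="retry.backoff.ms">3000</Property>
      <Property name="retries">3</Property>
      <Property name="batch.size">0</Property>
      <Property name="linger.ms">0</Property>
    </Kafka>
    <!-- Failover target: local rolling file -->
    <RollingFile name="rollingFile" fileName="${log-path}/app.log"
                 filePattern="${log-path}/app-%d{yyyy-MM-dd}-%i.log">
      <PatternLayout pattern="%d %p %c{1.} [%t] %m%n"/>
      <Policies>
        <SizeBasedTriggeringPolicy size="100 MB"/>
      </Policies>
    </RollingFile>
    <!-- Routes events to rollingFile whenever kafkaAppender throws -->
    <Failover name="failover" primary="kafkaAppender" retryIntervalSeconds="60">
      <Failovers>
        <AppenderRef ref="rollingFile"/>
      </Failovers>
    </Failover>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="failover"/>
    </Root>
  </Loggers>
</Configuration>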

Re: log4j2 issue

2017-03-20 Thread Ralph Goers
This scenario is why I use Flume. The FlumeAppender writes the event to local disk and then sends it to a downstream Flume agent. That agent writes the event to its own local disk and sends a response back to the FlumeAppender allowing it to delete the event from disk. The remote agent then forwards the event to Kafka. This guarantees no data loss, although duplicates are possible if you have multiple Flume agents for high availability.
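For what it's worth, a minimal sketch of a persistent FlumeAppender along these lines, assuming the log4j-flume-ng module is on the classpath; the host, port, and application name are placeholders:

<!-- type="persistent" spools events to dataDir and removes them
     once the downstream agent acknowledges receipt -->
<Flume name="flumeLogger" ignoreExceptions="false" type="persistent" dataDir="./flumeData">
  <Agent host="flume-agent-host" port="8800"/>
  <RFC5424Layout enterpriseNumber="18060" includeMDC="true" appName="MyApp"/>
</Flume>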

Ralph




Re: Re: log4j2 issue

2017-03-20 Thread Matt Sicker
Which data were lost? Was it pending log messages that hadn't been sent to
the KafkaProducer yet, or was it buffered messages inside KafkaProducer?
You could help debug this by setting the Kafka property "batch.size" to
"1", or perhaps "linger.ms" to "5" or so milliseconds.




-- 
Matt Sicker <boa...@gmail.com>


Re: log4j2 issue

2017-03-20 Thread Yang Rui
Hi, Matt Sicker

I used the FailoverAppender, but I found that data was still lost at the moment Kafka went down.

What measures can be taken to avoid this situation?

The attached configuration file is log4j2.xml.


Thanks,

Rui


[Attachment log4j2.xml: the same stripped attachment as in the message above.]


Re: log4j2 issue

2017-03-17 Thread Matt Sicker
If you don't care about old log messages that haven't been published yet
between times of Kafka availability, then yeah, discarding old messages
like that is an interesting workaround.

-- 
Matt Sicker 


Re: log4j2 issue

2017-03-17 Thread Mikael Ståldal
Have you tried to set blocking="false" on the AsyncAppender you have around
KafkaAppender?

Have you tried using the system properties log4j2.AsyncQueueFullPolicy and
log4j2.DiscardThreshold?
https://logging.apache.org/log4j/2.x/manual/configuration.html#log4j2.AsyncQueueFullPolicy
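A minimal sketch combining both suggestions (the appender names are assumed from this thread):

<!-- blocking="false": when the queue is full, callers do not wait; overflow
     events go to the error appender, or are dropped if none is configured -->
<Async name="asyncKafka" blocking="false" bufferSize="1024">
  <AppenderRef ref="kafkaAppender"/>
</Async>

and on the JVM command line, to discard INFO-and-below events when the async queue fills up:

-Dlog4j2.AsyncQueueFullPolicy=Discard -Dlog4j2.DiscardThreshold=INFO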




-- 
Mikael Ståldal
Senior software developer

Magine TV
mikael.stal...@magine.com
Grev Turegatan 3 | 114 46 Stockholm, Sweden | www.magine.com


Re: log4j2 issue

2017-03-14 Thread Matt Sicker
The gist of what you're probably looking for is a failover appender
configuration:
<https://logging.apache.org/log4j/2.x/manual/appenders.html#FailoverAppender>.
This can be used to switch to another appender when one fails, which is
perfect for networked appenders.
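For concreteness, a minimal Failover element (appender names are placeholders; note the primary must set ignoreExceptions="false" so its failures propagate to the FailoverAppender):

<Failover name="failover" primary="kafkaAppender" retryIntervalSeconds="60">
  <!-- retryIntervalSeconds controls how long events keep going to the
       failover before the primary appender is retried -->
  <Failovers>
    <AppenderRef ref="rollingFile"/>
  </Failovers>
</Failover>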




-- 
Matt Sicker 


log4j2 issue

2017-03-14 Thread Yang Rui
Hi,

I am Rui from China.

We use both the KafkaAppender (wrapped in an AsyncAppender) and the FileAppender of log4j2 version 2.6.2 in our application.

Here is the scenario: when the Kafka cluster goes down and stops serving, the application slows down and waits for the given timeout (request.timeout.ms) before finally responding (once the bufferSize of the async KafkaAppender is reached).

I am wondering whether there is a solution so that the FileAppender always works normally, without any performance impact from the KafkaAppender. In other words, the KafkaAppender could "DISCARD" logs while the Kafka cluster is down, and the application would still write logs through the FileAppender.


Thanks,
Rui



[Attachment log4j2.xml: the XML markup was stripped by the archive. Recoverable values: a kafka-servers property set to xx.xxx.xxx.xxx: (port elided), a log directory /applog/logging, and a Kafka appender referencing ${kafka-servers} with producer settings 1000, 3, 0, 0 (the property names were stripped).]


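A minimal sketch of the setup described in this message (an AsyncAppender wrapping the KafkaAppender, plus an independent FileAppender), assuming standard Log4j 2.6 element names; topic, layouts, and appender names are placeholders, and the producer property names are guesses mapped onto the surviving values 1000, 3, 0, 0:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="warn">
  <Properties>
    <Property name="kafka-servers">ip:port</Property>
    <Property name="log-path">/applog/logging</Property>
  </Properties>
  <Appenders>
    <Kafka name="kafka" topic="app-log">
      <PatternLayout pattern="%m%n"/>
      <Property name="bootstrap.servers">${kafka-servers}</Property>
      <Property name="retry.backoff.ms">1000</Property>
      <Property name="retries">3</Property>
      <Property name="batch.size">0</Property>
      <Property name="linger.ms">0</Property>
    </Kafka>
    <!-- This queue is what fills up and blocks the application when Kafka is down -->
    <Async name="asyncKafka" bufferSize="128">
      <AppenderRef ref="kafka"/>
    </Async>
    <File name="file" fileName="${log-path}/app.log">
      <PatternLayout pattern="%d %p %c{1.} [%t] %m%n"/>
    </File>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="asyncKafka"/>
      <AppenderRef ref="file"/>
    </Root>
  </Loggers>
</Configuration>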