**Theoretically**, a logging event should be a 'fire and forget'
activity from the perspective of the application.  That means that you
should be able to submit the logging event to the 'logging system'
without blocking, and then the 'logging system' should be able to
deliver those logging events to the final destination on its own
thread(s) and at its own speed.  As long as the average volume of
logging does not exceed the capacity of the actual logging mechanism
(e.g., over the network), then there ought to be no impact whatsoever on
the application (except perhaps for the CPU overhead of the physical
logging thread(s)).

I don't know log4j well enough to say whether it is already doing this
isolation of 'log event submission' vs 'log event processing'.  If it
doesn't, you should be able to have your application submit all logging
requests through a class written by you that puts all requests into a
memory cache, ensuring no blocking.  Then have one or more separate
threads handling the 'log event processing' independently from the rest
of the app.  For safety, you should probably add some code that monitors
memory, disk, and network utilization under this approach and notifies
you (perhaps by email) when things get backed up - that way you'll
detect any volume-caused issues before they have a chance to impact the
app.
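A minimal sketch of that submission class (all names here are mine, not
a real log4j API): application threads call submit(), which never
blocks; a daemon thread drains the queue and does the slow network
write; the backlog() and droppedCount() methods stand in for the
monitoring hooks suggested above.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical non-blocking log relay: submission never blocks the caller.
public class AsyncLogRelay {
    private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(10_000);
    private volatile long dropped = 0;  // events discarded because the queue was full

    public AsyncLogRelay() {
        Thread worker = new Thread(this::drainLoop, "log-relay");
        worker.setDaemon(true);  // don't keep the JVM alive just for logging
        worker.start();
    }

    /** Called by application threads; returns immediately, fire and forget. */
    public void submit(String event) {
        if (!queue.offer(event)) {  // offer() never blocks
            dropped++;              // backed-up signal: alert when this grows
        }
    }

    /** Runs on its own thread, at its own speed. */
    private void drainLoop() {
        try {
            while (true) {
                String event = queue.take();  // blocks only the worker thread
                shipOverNetwork(event);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    // Stand-in for the real socket/JMS/syslog write.
    private void shipOverNetwork(String event) { /* ... */ }

    /** Health metrics of the kind worth watching and alerting on. */
    public int backlog() { return queue.size(); }
    public long droppedCount() { return dropped; }
}
```

The bounded queue is the key design choice: an unbounded one would trade
blocking for an eventual out-of-memory error when the network falls
behind.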

If you really wanted to be bulletproof, implement the cache to use
memory first (up to some limit) and then overflow to a local file cache.
That way if your network gets bogged down by issues other than your
app's you would still be able to continue functioning while the network
recovers.
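The overflow idea could look roughly like this (again a sketch with
invented names, not a known library class): events stay in memory up to
a limit, then spill to a local file so the application keeps running
while the network recovers. Reading the spilled events back for delivery
is left out for brevity.

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical two-tier buffer: memory first, then a local file cache.
public class OverflowBuffer {
    private final Deque<String> memory = new ArrayDeque<>();
    private final int memoryLimit;
    private final Path spillFile;

    public OverflowBuffer(int memoryLimit, Path spillFile) {
        this.memoryLimit = memoryLimit;
        this.spillFile = spillFile;
    }

    public synchronized void add(String event) throws IOException {
        if (memory.size() < memoryLimit) {
            memory.addLast(event);  // fast path: stay in RAM
        } else {
            // Backed up: append to disk so the app keeps functioning
            // while the network recovers.
            try (BufferedWriter w = Files.newBufferedWriter(spillFile,
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND)) {
                w.write(event);
                w.newLine();
            }
        }
    }

    /** The drainer thread reads the memory tier first. */
    public synchronized String poll() { return memory.pollFirst(); }

    public synchronized int inMemory() { return memory.size(); }
}
```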

I know that did not answer your immediate question, but it should help
some in making your choices.

bruno

-----Original Message-----
From: Paul Smith [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, September 03, 2008 8:31 PM
To: Log4J Users List
Subject: network logging performance

If anyone out there is using "Logging over the network" of any form
(socket, JMS, Multicast, syslog appenders etc), this topic is for you.  
I'm wondering whether people could comment on their experience setting
up high performance logging of application data over the network.  The
cost of shipping an event over the wire compared with writing it to a
local file is obviously higher, and so one can notice user-visible
impact if the logging volume is high when using network-based logging
(unless one wraps it in AsyncAppenders).

Just curious to hear people's experiences, strategies, and thoughts.
Perhaps people can relate the flow rates they've managed to achieve
under different configurations.

The driver to my question is that for my Apache Lab project (Pinpoint)
I'm building a central logging repository server to allow data mining of
the generated logging events.  In a high-performance site, many logging
events from multiple threads in, say, a web app can be shipped serially
over the wire, so slowing down the logging can slow down the response
time.

cheers,

Paul Smith

---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]

