On Nov 1, 2005, at 10:40 AM, Jeff Davidson wrote:
Ken,
Would it be feasible to write to a (SQL) database appender? Later, you would simply query the results for each IP address as needed, instead of forcing the sorting of log records at the time that they are logged.
I'm very new to log4cxx, so I don't really know how much effort would be required.
Hi, there won't be any loss of data (log entries), since it will always reside in the original (composite) log file. The log file manager process will continue to scan the original log (until told otherwise) long after the original server has gone down. In fact, it need never go down, if you are go…
Hi!
The only thing you have to be careful about with Laz's approach is a failure of the application (program, process, or platform) while buffering all those records. The solution is viable only if you don't care about potential data loss. If data loss is unacceptable, which might be the case when…
Hi, you could also keep a single logger, with entries that specify which client a particular entry applies to, and build an auxiliary logger-management process that wakes up periodically, picks up where it left off, and farms out the entries to the appropriate discrete client files. Since this is done…
Ken
Another suggestion may be:
1. Set up a "Logger" pool, then keep a pointer to the Logger in your main code.
2. At the start of request processing for a client, set the Logger pointer to the appropriate logger in the pool, if one exists, or create a new one and save it in the pool.
3. After a certain…