[ please trim the quoted material a bit, folks ]

Magnus Hagander <mag...@hagander.net> writes:
> 2009/9/28 Robert Haas <robertmh...@gmail.com>:
>> The problem with having the syslogger send the data directly to an
>> external process is that the external process might be unable to
>> process the data as fast as syslogger is sending it. I'm not sure
>> exactly what will happen in that case, but it will definitely be bad.
This is the same issue already raised with respect to syslog versus
syslogger, ie, some people would rather lose log data than have the
backends block waiting for it to be written.

> That would mean we have to write everything to the file, though, so it
> would be rather bad for the case where you want to log "just a little"
> but are "delegating" the decision to the external process. And it
> would create double the I/O on disk for the logfile (once to the csv
> log, once processed by the external process).

Robert's design could be made to work without that, if you dump the
original log output into a ramdisk and let the external process write
whatever it chooses to real disk. If you have a system crash you lose
any as-yet-unprocessed log output, but hopefully there usually wouldn't
be much.

			regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
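[Editorial note, not part of the thread] The "lose log data rather than block the backends" policy discussed above can be sketched with a bounded in-memory buffer: a fast producer (standing in for the syslogger) does a non-blocking enqueue and counts drops when a slow consumer (the external log processor) lets the buffer fill. All names here (`BUFFER_SIZE`, `produce`, `consume`) are invented for illustration; this is not PostgreSQL code.

```python
# Hypothetical sketch of the drop-instead-of-block policy discussed in
# the thread; not PostgreSQL code. A small bounded queue stands in for
# the pipe between syslogger and an external log processor.
import queue
import threading

BUFFER_SIZE = 4                      # assumed tiny, to force overflow
buf = queue.Queue(maxsize=BUFFER_SIZE)
dropped = 0
consumed = []

def produce(n):
    """Emit n log lines without ever blocking: drop on a full buffer."""
    global dropped
    for i in range(n):
        try:
            buf.put_nowait(f"log line {i}")
        except queue.Full:
            dropped += 1             # the policy: lose the line, stay fast

def consume():
    """Slow consumer standing in for the external log processor."""
    while True:
        try:
            line = buf.get(timeout=0.2)
        except queue.Empty:
            break                    # buffer drained, nothing more coming
        consumed.append(line)

produce(100)                         # fast burst before the consumer runs
t = threading.Thread(target=consume)
t.start()
t.join()

print(f"consumed={len(consumed)} dropped={dropped}")
# prints: consumed=4 dropped=96
```

The ramdisk variant Tom suggests is the same idea one level up: the producer writes everything cheaply to volatile storage, and only the consumer's selective writes touch real disk, so a slow consumer costs you buffered data on a crash rather than backend throughput.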