On Mon, Sep 28, 2009 at 6:51 AM, Magnus Hagander <mag...@hagander.net> wrote:
>> I think it's better to spool the log messages to files, and then let
>> the external utility read the files.  The external utility can still
>> fall behind, but even if it does the cluster will continue running.
>
> The difficulty there is making it "live enough". But I guess if it
> implements the same method as tail -f, it would do that - the only
> issue then would be that it requires much more disk I/O.
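
Just to make the tail -f approach concrete, the reader side is fairly
simple.  Something along these lines ought to do it, assuming the
collector appends newline-terminated lines to a single file (the
filename and the 100ms poll interval are just placeholders here, and
log rotation and partial lines are ignored entirely):

#include <stdio.h>
#include <unistd.h>

int
main(void)
{
    /* "postgresql.log" stands in for wherever the collector spools */
    FILE   *fp = fopen("postgresql.log", "r");
    char    buf[8192];

    if (fp == NULL)
        return 1;

    fseek(fp, 0, SEEK_END);     /* start at the current end of the file */

    for (;;)
    {
        if (fgets(buf, sizeof(buf), fp) != NULL)
        {
            /* hand the line off to the analyzer proper */
            fputs(buf, stdout);
        }
        else
        {
            clearerr(fp);       /* clear EOF so later appends are seen */
            usleep(100000);     /* poll again in 100ms */
        }
    }
}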

The I/O is the tricky part.  If the extra disk writes turn out to be a
real problem, then maybe a pipe or socket is a better fit.  But if the
pipe fills up, the logging collector needs to start spooling the
messages to disk anyway, so that the whole system doesn't back up
behind the external log analyzer.  Figuring out the right design here
is not entirely straightforward.
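
Very roughly, the spill-to-disk fallback on the collector side could
look something like this.  All of the names (send_or_spool, pipe_fd,
spool_fd) are made up for illustration, and it assumes a nonblocking
pipe to the analyzer and an append-only spool file that were set up
elsewhere:

#include <errno.h>
#include <unistd.h>

static int  pipe_fd;    /* O_NONBLOCK pipe or socket to the analyzer */
static int  spool_fd;   /* append-only spool file on disk */

static void
send_or_spool(const char *msg, size_t len)
{
    ssize_t     written = write(pipe_fd, msg, len);

    if (written == (ssize_t) len)
        return;                 /* the analyzer kept up; nothing to spool */

    if (written < 0)
    {
        if (errno != EAGAIN && errno != EWOULDBLOCK)
            return;             /* a real error; real code would handle it */
        written = 0;            /* pipe full, nothing went through */
    }

    /*
     * The pipe is full (or the write was partial): spool the remainder to
     * disk so the backends never block behind a slow analyzer.
     */
    if (write(spool_fd, msg + written, len - written) < 0)
    {
        /* out of disk space etc.; again, real code would have to cope */
    }
}

The analyzer would then have to drain the spool file (tail -f style, as
above) before going back to the pipe, which is where most of the
ordering complexity would end up.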

...Robert
