On 2014-11-02 14:26:04 +0100, Magnus Hagander wrote:
> I had a discussion with a few people recently about a hack I wrote for
> pg_receivexlog at some point, but never ended up submitting, and in
> cleaning that up realized I had an open item on it.
> 
> The idea is to add a switch to pg_receivexlog (in this case, -a, but
> that can always be bikeshedded, of course) that acts somewhat like
> archive_command on the backend. The idea is to have pg_receivexlog
> fire off an external command at the end of each segment - for example
> a command to gzip the file, or to archive it off into a Magic Cloud
> (TM) or something like that.

I can see that to be useful.

> My current hack just fires off this command using system(). That will
> block the pg_receivexlog process, obviously. The question is, if we
> want this, what kind of behaviour would we want here? One option is to
> do just that, which should be safe enough for something like gzip but
> might cause trouble if the external command blocks on network for
> example. Another option would be to just fork() and run it in the
> background, which could in theory lead to an unlimited number of
> processes if they all hang. Or of course we could have a background
> process that queues them up - much like we do in the main backend,
> which is definitely more complicated.
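
The blocking variant is basically just this - a minimal sketch with
made-up names, simply appending the segment path rather than doing
archive_command-style %p substitution:

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/*
 * Sketch only: run the user-supplied command synchronously once a
 * segment file is finished, blocking pg_receivexlog until it returns.
 */
static bool
run_segment_command_blocking(const char *command, const char *segpath)
{
	char		cmd[4096];

	snprintf(cmd, sizeof(cmd), "%s \"%s\"", command, segpath);

	return system(cmd) == 0;
}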

How about a middle ground: fork the command off, but wait() on it
before you start the next command?
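
Roughly like this - just a sketch, all names made up, with at most one
command in flight at a time:

#include <stdbool.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static pid_t prev_child = -1;	/* command started for the previous segment */

static bool
launch_segment_command(const char *cmd)
{
	pid_t		pid;

	/* first reap the command started for the previous segment, if any */
	if (prev_child > 0)
	{
		int			status;

		if (waitpid(prev_child, &status, 0) < 0 ||
			!WIFEXITED(status) || WEXITSTATUS(status) != 0)
		{
			fprintf(stderr, "command for previous segment failed\n");
			return false;		/* caller decides whether to retry */
		}
	}

	pid = fork();
	if (pid < 0)
		return false;
	if (pid == 0)
	{
		/* child: hand the command string to the shell */
		execl("/bin/sh", "sh", "-c", cmd, (char *) NULL);
		_exit(127);
	}

	/* parent: remember the child and return immediately */
	prev_child = pid;
	return true;
}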

This will need some persistent state about the command's success -
similar to the current archive status stuff. Given retries and
everything, it might end up being easier to have a separate process.
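
The persistent state could be as little as touching a per-segment
marker file once the command has succeeded, loosely modelled on the
backend's archive_status directory - again only a sketch with
hypothetical names:

#include <fcntl.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

/*
 * Create "<segment>.done" after the command succeeded; after a restart,
 * any finished segment without a .done marker still needs its command
 * run (or retried).
 */
static bool
mark_segment_done(const char *segpath)
{
	char		donepath[4096];
	int			fd;

	snprintf(donepath, sizeof(donepath), "%s.done", segpath);

	fd = open(donepath, O_WRONLY | O_CREAT | O_TRUNC, 0600);
	if (fd < 0)
		return false;

	/* make the marker durable before relying on it */
	if (fsync(fd) != 0)
	{
		close(fd);
		return false;
	}

	return close(fd) == 0;
}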

Greetings,

Andres Freund

-- 
 Andres Freund                     http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

