On May 28, 2010, at 7:19 PM, Bruce Momjian <br...@momjian.us> wrote:
Jan Wieck wrote:
Reading the entire WAL just to find all COMMIT records, then going back to the origin database to get the actual replication log you're looking for, is
simpler and more efficient? I don't think so.

Agreed, but I think I've not explained myself well enough.

I proposed two completely separate ideas; the first one was this:

If you must get commit order, get it from the WAL on the *origin*, using the
exact same code that the current WALSender provides, plus some logic to read
through the WAL records and extract commits/aborts. That seems much simpler
than the proposal you outlined, and as streaming replication (SR) shows, it's
low latency as well, since commits write to WAL. No need to generate event
ticks either; just use XLogRecPtrs, as WALSender already does.

I see no problem with integrating that into core, technically or
philosophically.
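
To make the XLogRecPtr point concrete: an XLogRecPtr (LSN) is just a 64-bit
byte position in the WAL stream, conventionally printed as two hex halves such
as 0/01531690, so commit records can be ordered simply by comparing those
positions. A minimal illustrative sketch (the values and helper name below are
made up for illustration, not taken from this thread):

# Illustrative only: treat textual LSNs ("hi/lo" hex) as 64-bit WAL
# positions so they can be compared and sorted as commit-order ticks.

def lsn_to_int(lsn: str) -> int:
    """Convert an LSN like '0/01531690' to its 64-bit WAL position."""
    hi, lo = lsn.split("/")
    return (int(hi, 16) << 32) | int(lo, 16)

# Commit records collected in any order; sorting by LSN recovers commit
# order, since WAL insert positions only ever increase.
commits = [("0/0155FBD0", 731), ("0/01531690", 730)]  # (lsn, xid), made-up values
commits.sort(key=lambda c: lsn_to_int(c[0]))
assert [xid for _, xid in commits] == [730, 731]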


Which means that if I want to allow a consumer of that commit-order data
to go offline for three days or so and still replicate the 5 requested,
low-volume tables, the origin needs to hang on to the entire WAL, including
the traffic from all 100 other high-volume tables?

I suggest writing an external tool, runnable at any time, that strips out
what you need, rather than creating a new data format and overhead for this
use case.
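
For a sense of what such an external pass might look like, here is a rough
sketch that leans on the pg_waldump utility (an assumption for illustration,
not something specified in this thread) to filter the Transaction resource
manager and pull out commit/abort records with their LSNs; the command-line
flags, output format, and regular expression are approximations and vary by
server version:

# Rough sketch of an external "strip out what you need" pass: parse
# pg_waldump output to list transaction end records in WAL order.
import re
import subprocess

# Matches lines roughly like:
#   rmgr: Transaction len (rec/tot): 46/46, tx: 730, lsn: 0/01531690, ..., desc: COMMIT ...
TXN = re.compile(r"tx:\s*(\d+),\s*lsn:\s*([0-9A-Fa-f]+/[0-9A-Fa-f]+).*desc:\s*(COMMIT|ABORT)")

def transaction_ends(wal_dir: str, start_lsn: str, end_lsn: str):
    """Yield (lsn, xid, action) for each commit/abort record between two LSNs."""
    cmd = ["pg_waldump", "--rmgr=Transaction",
           "--path", wal_dir, "--start", start_lsn, "--end", end_lsn]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    for line in out.stdout.splitlines():
        m = TXN.search(line)
        if m:
            yield m.group(2), int(m.group(1)), m.group(3)

if __name__ == "__main__":
    # Paths and LSNs are hypothetical, purely for illustration.
    for lsn, xid, action in transaction_ends("/path/to/archived/wal",
                                             "0/01000000", "0/02000000"):
        print(lsn, xid, action)

Because it needs only the WAL segments themselves, such a pass can be run at
any time against an archive; the cost, per the objection above, is how long
the origin has to keep those segments around.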

An external tool would be FAR more complex, less robust, and less performant, whereas doing what Jan has proposed is pretty straightforward and should have minimal impact on performance, or none when not enabled.

...Robert

