Hi Everyone,
Has any thought gone into improving performance for large table deletes? Let me explain our daily operation:
1) We run throughout the day, accumulating data in several tables that are replicated for disaster recovery.
2) After the system shuts down, the data is archived.
3) The tables are then cleared out, i.e. delete from table;
Currently, because the delete trigger fires on each row, you end up with a ton of individual row deletes that all have to propagate. Is there any way to detect a large group of deletes in the sl_log tables and propagate them as a single range delete instead?
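To illustrate what I mean, here is a minimal sketch (the table and column names are hypothetical, and the second statement is the kind of thing I am hoping the replication engine could synthesize, not something Slony does today):

```sql
-- Today: clearing the table logs one event per row, so a
-- subscriber replays N individual single-row deletes.
DELETE FROM daily_metrics;  -- hypothetical table

-- Hoped for: collapse a contiguous run of logged deletes into
-- one range-qualified statement on the subscriber, e.g.
DELETE FROM daily_metrics
 WHERE id BETWEEN 1 AND 1000000;  -- hypothetical key range
```

Even if the range can't always be inferred from the key, detecting the "whole table was emptied" case alone would cover our nightly maintenance.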
Thanks!
--Sean
_______________________________________________
Slony1-general mailing list
[email protected]
http://gborg.postgresql.org/mailman/listinfo/slony1-general
