On 7/5/07, Brad Nicholson <[EMAIL PROTECTED]> wrote:
On Wed, 2007-07-04 at 14:31 -0400, Jan Wieck wrote:
> What I see here is that we are trying to come up with a special case
> optimization for mass-deletes. No mass insert or update operations will
> benefit from any of this. Do people do mass deletes that much that we
> really have to worry about them?
>


Yes, there are a few places I can think of where we would directly
benefit from this.  Trimming data from very active log tables is the
main case.  We also recently had a case where we needed to delete a fair
amount of data from a table quickly to prevent degraded performance in a
front-line system.  We ended up dropping the table, deleting, and
re-subscribing.  We were not in a situation where we could have done the
smaller batches that Slony would have liked.

May I rephrase Jan's question?  Are there any cases where people do
mass deletion that couldn't be solved using existing table
partitioning approaches?  In the case of the log tables Brad mentions
above, one solution is to partition the table by temporal range, say
on a monthly basis.  Once all the data in a given partition has reached
its retention limit, simply drop the partition.
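As a rough sketch of what that looks like with the inheritance-based partitioning available in PostgreSQL of that era (the table and column names here are made up for illustration):

```sql
-- Parent table holding no rows itself.
CREATE TABLE app_log (
    logged_at  timestamptz NOT NULL,
    message    text
);

-- One child table per month; the CHECK constraint lets
-- constraint exclusion skip irrelevant partitions at query time.
CREATE TABLE app_log_2007_06 (
    CHECK (logged_at >= '2007-06-01' AND logged_at < '2007-07-01')
) INHERITS (app_log);

-- When the oldest month passes its retention limit, discard it
-- near-instantly instead of running a mass DELETE:
DROP TABLE app_log_2007_06;
```

Dropping the child table reclaims the space immediately and generates no per-row WAL or replication traffic, which is exactly what a mass DELETE cannot avoid.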

This costs some additional DDL scripting (via EXECUTE SCRIPT), but I
think it would solve the problem quite effectively.  It also requires no
additions or changes to Slony.  Finally, it requires the user to be
running a more modern version of PostgreSQL; however, if you care about
performance, that seems a reasonable assumption.

Andrew
_______________________________________________
Slony1-general mailing list
[email protected]
http://lists.slony.info/mailman/listinfo/slony1-general