ow <[EMAIL PROTECTED]> writes:
> My concern though ... wouldn't pgSql server collapse when faced with
> transaction spawning across 100M+ records?
The number of records involved really doesn't faze Postgres at all.
However, the amount of time spent in the transaction could be an issue
if there is other activity elsewhere in the same database.

As long as the transaction is running, none of the deleted or
old updated rows anywhere in the database can be cleaned up by vacuum,
because Postgres thinks the big transaction "might" still need to see
them. So if the rest of the database stays active, the tables and
indexes being updated may grow larger than normal. If the transaction
goes on for a _really_ long time, they might need a VACUUM FULL at
some point to clean them up.
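A minimal sketch of how you might keep an eye on this, assuming
PostgreSQL 9.2 or later (earlier releases spell the pg_stat_activity
columns differently), with a placeholder table name in the second
statement:

    -- List open transactions, oldest first; a transaction that has
    -- been open for hours is what holds vacuum back.
    SELECT pid,
           now() - xact_start AS xact_age,
           state,
           query
      FROM pg_stat_activity
     WHERE xact_start IS NOT NULL
     ORDER BY xact_age DESC;

    -- If a table has already bloated past what plain VACUUM can
    -- reclaim, VACUUM FULL rewrites it to return the space, but it
    -- takes an exclusive lock on the table while it runs:
    VACUUM FULL some_big_table;  -- hypothetical table name

-- greg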