On 12/04/11 17:38, Paul & Caroline Lewis wrote:
Hi, Thank you Mark and Richard for your replies. Having looked at this, it seems a VACUUM FULL is the answer, although I'm not sure why. Processing the SQL scripts as originally reported, I do get a large table from TestSet1 and a small table from TestSet2. Once a VACUUM FULL is performed on the large table from TestSet1, its size drops to the same as the small table from TestSet2. However, adding a VACUUM FULL into the TestSet1 procedure makes it much slower to run than TestSet2, especially when uploading the very large data sets (70 million rows). This begs the question: is TestSet2 very efficient, or is it missing something fundamental that a VACUUM FULL provides that I'm not realising at the moment?
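For reference, the size difference described above can be measured directly from psql; a minimal sketch, with 'testset1' standing in for the table loaded by the TestSet1 script:

    -- Check on-disk size before and after a VACUUM FULL
    -- ('testset1' is a placeholder for the actual table name)
    SELECT pg_size_pretty(pg_total_relation_size('testset1'));
    VACUUM FULL testset1;
    SELECT pg_size_pretty(pg_total_relation_size('testset1'));

Note that pg_total_relation_size() includes indexes and TOAST data, so it reflects the full on-disk footprint of the table.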
That's strange - do you see the same behaviour if you swap the order of the data loads, i.e. do the ordered data set first, and/or use a different table name for each load? I'm just wondering if you're seeing some kind of table bloat, given that VACUUM FULL fixes the issue.
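One quick way to test the bloat theory is to look at the dead-tuple counts kept by the statistics collector; a minimal sketch, again with an illustrative table name:

    -- Live vs. dead tuples for the suspect table; a high n_dead_tup
    -- relative to n_live_tup suggests bloat that VACUUM FULL would reclaim
    SELECT relname, n_live_tup, n_dead_tup
    FROM pg_stat_user_tables
    WHERE relname = 'testset1';

If n_dead_tup is near zero but the table is still oversized, the extra space is more likely unreclaimed free space left behind during the load rather than dead rows.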
ATB,

Mark.

--
Mark Cave-Ayland - Senior Technical Architect
PostgreSQL - PostGIS
Sirius Corporation plc - control through freedom
http://www.siriusit.co.uk
t: +44 870 608 0063

Sirius Labs: http://www.siriusit.co.uk/labs