Are the IDs in both tables indexed?
If so, run ANALYZE on both tables so the planner has fresh statistics. If not, create an index on each ID column -- that may help. Please post the EXPLAIN plan of the query as well. My guess is that you are getting a full scan instead of an index scan, and re-scanning the 110-million-row subquery result for each of the 800 million rows could take a very long time.
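
For example (just a sketch -- the index name is made up, and you can
skip indexing records.id, since its primary key already provides one):

CREATE INDEX unique_records_id_idx ON unique_records (id);
ANALYZE records;
ANALYZE unique_records;
EXPLAIN
DELETE FROM records WHERE id NOT IN (SELECT id FROM unique_records);

If the plan shows a plain (non-hashed) SubPlan filter, the subquery
result is being re-scanned for every candidate row, which would explain
the two-week runtime; raising work_mem for the session may let the
planner use a hashed subplan instead. Another rewrite worth testing is
NOT EXISTS, which can probe the new index once per outer row:

DELETE FROM records r
WHERE NOT EXISTS (SELECT 1 FROM unique_records u WHERE u.id = r.id);

One caveat: NOT IN and NOT EXISTS behave differently if
unique_records.id can contain NULLs (NOT IN deletes nothing in that
case), so check for NULLs before switching.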

Hope that helps,

Piotr

On 2011-02-10 18:57, Josh wrote:
Hi

I'm trying to do a DELETE FROM on my large table (about 800 million
rows) based on the contents of another, moderately large table (about
110 million rows). The command I'm using is:

DELETE FROM records WHERE id NOT IN (SELECT id FROM unique_records);

This process ran for about two weeks before I decided to stop it -- it
was dragging down the DB server. I can understand long-running
processes, but two weeks seems a bit much even for a big table.

Is this the best way to approach the problem? Is there a better way?

Some background: The server is version 8.3, running nothing but Pg.
The 'records' table has 'id' as its primary key, and one other index
on another column. The table is referenced by just about every other
table in my DB (about 15 other tables) via foreign key constraints,
which I don't want to break (which is why I'm deleting rows rather
than just recreating the table). Most of the dependent tables have ON
DELETE CASCADE. The 'unique_records' table is a temp table I got via
something like:

SELECT DISTINCT (other_column) id INTO unique_records FROM records;


Thanks very much!

Josh Leder





