Hi,

I have a table with 15M rows. The table is around 5GB on disk.

Clustering the table takes 5 minutes.

A seq scan takes 20 seconds.
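(For reference, here's roughly what I'm timing; table and index names below are 
placeholders:

    -- the slow one, ~5 minutes:
    CLUSTER mytable USING mytable_mycol_idx;

    -- a full seq scan for comparison, ~20 seconds:
    SELECT count(*) FROM mytable;
)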

I guess clustering is done by scanning the index in order and then fetching the 
matching rows from the heap.
If that's the case, the random reads needed to fetch those rows from disk would 
explain the enormous time it takes to cluster the table.

Since I can set work_mem > 5GB, couldn't postgres do something like:

- read the whole table into memory
- access the table in memory instead of on disk when fetching rows in index order

?
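(Something close to this can even be faked by hand: warm the cache with one 
sequential pass right before clustering. Names are placeholders again, and it 
assumes the whole 5GB actually fits in the OS cache:

    SELECT count(*) FROM mytable;             -- one seq scan to pull the heap into cache
    CLUSTER mytable USING mytable_mycol_idx;  -- heap fetches now mostly hit memory
)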

I mean: there's an ACCESS EXCLUSIVE lock on the table while clustering, so I 
don't see any problem in doing it... this way you could 

- avoid sorting (which is what the "create newtable as select * from oldtable 
order by mycol" method sketched below relies on; that can be slow with 15M rows, 
plus in my case it uses 8GB of RAM...)
- avoid random reads on disk
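
For completeness, that rewrite-by-sort method spelled out (placeholder names; 
indexes, constraints, and grants would have to be recreated on the new table):

    BEGIN;
    CREATE TABLE newtable AS
        SELECT * FROM oldtable ORDER BY mycol;
    -- recreate indexes, constraints, grants on newtable here
    DROP TABLE oldtable;
    ALTER TABLE newtable RENAME TO oldtable;
    COMMIT;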

Am I missing something, or is it just that it "hasn't been done yet"?


 



