Tom Lane wrote:
> Andrew Dunstan <[EMAIL PROTECTED]> writes:
>> I should have expressed it better. The idea is to have pg_dump emit the objects in an order that allows the restore to take advantage of sync scans, so sync scans being disabled in pg_dump would not matter at all.
>
> Unless you do something to explicitly parallelize the operations,
> how will a different ordering improve matters?
>
> I thought we had a paper design for this, and it involved teaching
> pg_restore how to use multiple connections.  In that context it's
> entirely up to pg_restore to manage the ordering and ensure dependencies
> are met.  So I'm not seeing how it helps to have a different sort rule
> at pg_dump time --- it won't really make pg_restore's task any easier.


Well, what actually got me going on this initially was that I got annoyed by having indexes not grouped by table when I dumped out the schema of a database, because it seemed a bit illogical. Then I started thinking about it, and it seemed to me that even without synchronised scanning or parallel restoration we might benefit from building all of a given table's indexes together, especially if the whole table could fit in either shared buffers or the OS cache.
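
To make that concrete, here is a contrived sketch of the two orderings (the table and index names are invented, purely for illustration):

    -- Interleaved ordering: each build may find that scanning the other
    -- table has pushed this table's heap out of cache, so every scan
    -- starts cold.
    CREATE INDEX orders_customer_idx ON orders (customer_id);
    CREATE INDEX invoices_date_idx   ON invoices (issued_at);
    CREATE INDEX orders_date_idx     ON orders (ordered_at);

    -- Grouped ordering: all of a table's indexes are built while its heap
    -- is still in shared buffers / the OS cache, and with a parallel
    -- restore plus synchronize_seqscans the concurrent builds could share
    -- a single pass over the heap.
    CREATE INDEX orders_customer_idx ON orders (customer_id);
    CREATE INDEX orders_date_idx     ON orders (ordered_at);
    CREATE INDEX invoices_date_idx   ON invoices (issued_at);

Whether the grouped form actually wins will of course depend on the table fitting in cache, as noted above.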

cheers

andrew

