On Thu, Dec 16, 2010 at 12:48 PM, Heikki Linnakangas
<heikki.linnakan...@enterprisedb.com> wrote:
> One more thing: the motivation behind this patch is to allow parallel
> pg_dump in the future, so we should make sure this patch caters well
> for that.
>
> As soon as we have parallel pg_dump, the next big thing is going to be
> parallel dump of the same table using multiple processes. Perhaps we
> should prepare for that in the directory archive format by allowing the
> data of a single table to be split into multiple files. That way
> parallel pg_dump is simple: you just split the table into chunks of
> roughly the same size, say 10GB each, and launch a process for each
> chunk, writing to a separate file.
>
> It should be quite a simple add-on to the current patch, but it will
> make life so much easier for parallel pg_dump. It would also help to
> work around file size limitations on some filesystems.

Sounds reasonable.  Are you planning to do this and commit?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

