(This question was answered several days ago on this list; please check 
the list archives before posting. I believe it's also in the FAQ.)

> If PostgreSQL is run on a system that has a file size limit (2
> gig?), what might cause us to hit the limit?

Postgres will never internally use files (e.g. for tables, indexes,
etc.) larger than 1GB -- once a relation reaches that size, its storage
is split into additional 1GB segment files.
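
(You can see this in the data directory if you're curious: each table is
stored in a file named after its relfilenode, with overflow segments
numbered .1, .2, and so on. The OIDs below are made up, purely to
illustrate the layout.)

    # peek at the on-disk segments of one table -- hypothetical OIDs
    $ ls $PGDATA/base/16384/1234567*
    1234567  1234567.1  1234567.2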

However, you might run into problems when you export data from Pg to
somewhere else, for example if you pg_dump the contents of a database
larger than 2GB. In that case, filter pg_dump through gzip or bzip2 to
reduce the size of the dump. If that's still not enough, you can dump
individual tables (with -t) or pipe the dump through 'split' to divide
it into several files.
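
Something like the following (rough sketches; 'mydb', 'bigtable', and
the output file names are placeholders -- substitute your own):

    # compress the dump as it is written
    pg_dump mydb | gzip > mydb.dump.gz

    # or break the dump into roughly 1GB chunks (mydb.dump.aa, .ab, ...)
    pg_dump mydb | split -b 1000m - mydb.dump.

    # or dump one large table by itself
    pg_dump -t bigtable mydb > bigtable.dump

To reload later, feed the pieces back to psql, e.g.
'gunzip -c mydb.dump.gz | psql newdb' or 'cat mydb.dump.* | psql newdb'.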

Cheers,

Neil

