Stefan Kaltenbrunner <ste...@kaltenbrunner.cc> writes:
> well, the usual problem is that it is fairly easy to get large
> (several hundred megabyte) bytea objects into the database, but upon
> retrieval we tend to consume up to 3x the size of the object in
> actual memory, which causes us to hit all kinds of limits (especially
> on 32bit boxes).

It occurs to me that one place that might be unnecessarily eating
backend memory during pg_dump is encoding conversion during COPY OUT.
Make sure that pg_dump isn't asking for a conversion to an encoding
other than the one the database uses.  I think the default is to avoid
conversion, so this might be a dead end --- but if, for instance, you
had PGCLIENTENCODING set in the client environment, it could bite you.
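
A quick way to check (a sketch, assuming a standard psql/pg_dump
setup; "mydb" is a placeholder database name):

    # what encoding the database itself uses
    psql -d mydb -c "SHOW server_encoding;"

    # what encoding the client session is requesting
    psql -d mydb -c "SHOW client_encoding;"

    # make sure nothing in the environment forces a conversion,
    # then dump
    unset PGCLIENTENCODING
    pg_dump mydb > mydb.dump

If client_encoding differs from server_encoding, the backend converts
every row on the way out during COPY OUT, which is one place the extra
memory could be going.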

                        regards, tom lane
