Robert Haas wrote:
On Sat, Oct 30, 2010 at 9:30 PM, Arturas Mazeika <maze...@gmail.com> wrote:
Thanks for the info, this explains a lot.

Yes, I am upgrading from the 32bit version to the 64bit one.

We have pretty large databases (some with over 1 trillion rows, and some
containing large documents in blobs). Being able to give Postgres more than
the 4GB limit is something we have long been waiting for. Postgres was able to
handle large datasets (I suppose it uses something like a long long (64-bit)
data type in C), and I naively hoped that Postgres would be able to migrate
from one version to the other without too much trouble.

I tried to pg_dump one of the DBs with large documents. It failed with an
out-of-memory error. I suppose it is rather hard to migrate in my case :-( Any
suggestions?
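
(A possible workaround, if the out-of-memory error comes from pg_dump pulling
whole large objects into memory, is to export the large objects one at a time
and dump the rest of the database separately. Below is a minimal sketch using
psycopg2; the connection string and output path are placeholders, and reading
pg_largeobject may require superuser rights -- on 9.0 and later you can query
pg_largeobject_metadata instead.)

# Sketch: export each large object individually so no single dump step
# has to hold a multi-hundred-MB value in memory. Error handling omitted.
import psycopg2

conn = psycopg2.connect("dbname=mydb")        # hypothetical connection string
cur = conn.cursor()

# Enumerate large object OIDs (may require superuser; on 9.0+ you can use
# "SELECT oid FROM pg_largeobject_metadata" instead).
cur.execute("SELECT DISTINCT loid FROM pg_largeobject")
oids = [row[0] for row in cur.fetchall()]

for oid in oids:
    lob = conn.lobject(oid, "rb")             # open existing LO read-only, binary
    lob.export("/backup/lo_%d.bin" % oid)     # stream it to a client-side file
    lob.close()

conn.commit()
conn.close()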

Yikes, that's not good. How many tables do you have in your database?
How many large objects? Any chance you can coax a stack trace out of
pg_dump?

Well, the usual problem is that it is fairly easy to get large (several hundred megabyte) bytea objects into the database, but upon retrieval we tend to need up to 3x the size of the object in actual memory, which causes us to hit all kinds of limits (especially on 32-bit boxes). We really need to look into reducing that, or into putting a more prominent "don't use bytea for anything larger than, say, 50MB" warning into the documentation.
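
One way to keep the client from materialising the whole value (plus its
escaped copies) at once is to fetch the bytea column in slices with
substring(). A minimal sketch, assuming psycopg2 and a hypothetical
documents(id, payload bytea) table:

# Sketch: stream one bytea value to a local file in bounded-size pieces
# instead of fetching it in a single round trip. Table and column names
# are placeholders.
import psycopg2

CHUNK = 8 * 1024 * 1024                       # 8 MB per round trip; tune to taste

conn = psycopg2.connect("dbname=mydb")        # hypothetical connection string
cur = conn.cursor()

def fetch_bytea(doc_id, out_path):
    cur.execute("SELECT octet_length(payload) FROM documents WHERE id = %s",
                (doc_id,))
    total = cur.fetchone()[0]
    with open(out_path, "wb") as f:
        offset = 1                            # substring() on bytea is 1-based
        while offset <= total:
            cur.execute(
                "SELECT substring(payload FROM %s FOR %s)"
                " FROM documents WHERE id = %s",
                (offset, CHUNK, doc_id))
            f.write(cur.fetchone()[0])
            offset += CHUNK

fetch_bytea(42, "/tmp/doc_42.bin")
conn.close()

Whether this also helps on the server side depends on how the column is
stored (compressed values still have to be detoasted in full), but it does
bound the per-round-trip allocation on the client.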

Stefan
