Some more interesting information.

The insert statement is issued via a JDBC callback to the PostgreSQL
database, because the application requires partial commits (the
equivalent of autonomous transactions).
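
For context, the insert loop is shaped roughly like the sketch below.
This is a minimal illustration, not the application's actual code: the
connection URL, the table t and its columns, and the 10,000-row commit
interval are all placeholder assumptions.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class PartialCommitInsert {
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/mydb", "user", "password");
            conn.setAutoCommit(false); // we choose the commit points ourselves

            PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO t (id, payload) VALUES (?, ?)");
            for (int i = 1; i <= 8000000; i++) {
                ps.setInt(1, i);
                ps.setString(2, "row " + i);
                ps.addBatch();
                if (i % 10000 == 0) {
                    ps.executeBatch();
                    conn.commit(); // partial commit: rows so far survive a later failure
                }
            }
            ps.executeBatch(); // flush any remaining rows
            conn.commit();
            ps.close();
            conn.close();
        }
    }

Strictly speaking, an Oracle-style autonomous transaction would commit
on a second connection so the main transaction stays open; periodic
commits on one connection are the simpler approximation.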

What I noticed is that when the insert runs through JDBC, the writer
process is very active and consumes a lot of memory.

When I attempted the same insert manually in pgAdmin, the writer
process did not appear near the top of top's process list.

I wonder if the JDBC callback causes PostgreSQL to allocate memory
differently.


-----Original Message-----
From: Tom Lane [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, March 21, 2006 2:38 PM
To: Sriram Dandapani
Cc: pgsql-admin@postgresql.org
Subject: Re: [ADMIN] out of memory error with large insert 

"Sriram Dandapani" <[EMAIL PROTECTED]> writes:
> On a large transaction involving an insert of 8 million rows, after a
> while Postgres complains of an out of memory error.

If there are foreign-key checks involved, try dropping those constraints
and re-creating them afterwards.  Probably faster than retail checks
anyway ...

                        regards, tom lane
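
A sketch of the drop-and-recreate cycle Tom describes, using
hypothetical names throughout (child table t with foreign key
fk_t_parent referencing table parent):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class DropFkForBulkLoad {
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/mydb", "user", "password");
            Statement st = conn.createStatement();

            // Drop the FK so the bulk insert skips the per-row constraint check
            st.execute("ALTER TABLE t DROP CONSTRAINT fk_t_parent");

            // ... run the 8-million-row insert here ...

            // Re-create the FK; PostgreSQL validates all rows in a single pass
            st.execute("ALTER TABLE t ADD CONSTRAINT fk_t_parent "
                    + "FOREIGN KEY (parent_id) REFERENCES parent (id)");

            st.close();
            conn.close();
        }
    }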
