Hi Thom,

Yeah, they can be divided up, but my main issue is that I would like these functions wrapped up so that the client (who has little to no experience with PostgreSQL) can just run a single SQL function that executes all of these updates and prepares the many tables and functions the product needs. (Essentially SELECT install_the_program() to set up the DB and build the tables.)
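
The sort of wrapper I have in mind is something along these lines (install_the_program is the name I'd actually use, but build_base_tables and run_heavy_updates are just placeholders for the real steps):

    CREATE OR REPLACE FUNCTION install_the_program() RETURNS void AS $$
    BEGIN
        RAISE NOTICE 'Building base tables...';
        PERFORM build_base_tables();    -- placeholder for the real build step
        RAISE NOTICE 'Running heavy updates...';
        PERFORM run_heavy_updates();    -- placeholder for the real update step
    END;
    $$ LANGUAGE plpgsql;

    -- so the client only ever has to run:
    -- SELECT install_the_program();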

However, I keep running into problems because the queries are very time-consuming (several days even on fast machines with plenty of memory) and individual queries seem to require different configuration parameters.
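
What I would like is to attach the settings to each step, something along these lines (the values here are just illustrative, and run_heavy_updates is the placeholder from the sketch above):

    -- Settings bound to one function, applied only while it runs:
    ALTER FUNCTION run_heavy_updates() SET work_mem = '1GB';
    ALTER FUNCTION run_heavy_updates() SET maintenance_work_mem = '2GB';

    -- Or, when running a step by hand in its own transaction:
    BEGIN;
    SET LOCAL work_mem = '1GB';
    -- ... run the step here ...
    COMMIT;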

I have a feeling it is all going to boil down to writing a (Python) script that builds the DB from the command line on Linux. But they really want all of the functionality encapsulated in the PostgreSQL server, including this build process.
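
If it does come to that, I suspect the script would mostly just run each step in its own transaction with its own settings, i.e. the equivalent of feeding psql a plain SQL file (run with psql -d mydb -f build_db.sql; the table, column and index names below are made up):

    -- build_db.sql (hypothetical)
    BEGIN;
    SET LOCAL work_mem = '1GB';
    UPDATE big_table SET name = trim(name);                        -- placeholder step
    COMMIT;

    BEGIN;
    SET LOCAL maintenance_work_mem = '2GB';
    CREATE INDEX big_table_name_idx ON big_table (name);           -- placeholder step
    COMMIT;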

Cheers,
Tom

On 12/07/2010 14:57, Thom Brown wrote:
On 12 July 2010 14:50, Tom Wilcox <hungry...@gmail.com> wrote:
Hi Thom,

I am performing update statements against a single table that is about 96GB
in size. These updates are grouped together in a single transaction, and this
transaction runs until the machine runs out of disk space.

What I am trying to achieve is for PostgreSQL to complete this updating
transaction without running out of disk space. I assume this is happening
because, for a rollback to be possible, PostgreSQL must keep the previous
versions of the rows it changes for as long as the transaction is not yet
committed. I figure this is the most likely cause of us running out of disk
space, and I would therefore like to reconfigure PostgreSQL so that it does
not hold onto those previous copies somehow.

Any suggestions?

Cheers,
Tom

Hi Tom,

Is it not possible to do these updates in batches, or does it have to be atomic?
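
For example, something like this, each chunk in its own transaction (the column and key names are made up, just to illustrate the idea):

    -- Each chunk commits separately, so old row versions can be cleaned up
    -- by VACUUM as you go instead of piling up for the whole 96GB at once.
    BEGIN;
    UPDATE big_table SET name = trim(name)
     WHERE id >= 1 AND id < 1000000;
    COMMIT;

    BEGIN;
    UPDATE big_table SET name = trim(name)
     WHERE id >= 1000000 AND id < 2000000;
    COMMIT;
    -- ...and so on, with a VACUUM run periodically.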

(A small note about replying: please use "reply to all", and on this mailing list responses should go below.)

Regards

Thom

