On 1 Apr 2010, at 12:22, Birgit Laggner wrote:

> Dear list,
> 
> I have some data (big size) and I've written a long function in pl/pgsql
> which processes the data in several steps. At a test run my function
> aborted because of memory exhaustion. My guess is, that everything what
> happens during the function transaction is stored in the memory - until
> it's full...

No, transactions operate on the database just like anything else. Running out 
of memory that way is not impossible, but you'd need to create some really 
big transactions to reach those limits (I think I've been quoted a limit of 
2^32 commands per transaction a looong time ago, just to give you an 
indication).

You're probably running out of memory for another reason. Now to figure out 
why...

I've seen an issue with deferred constraints causing this, for example. Do you 
use deferred constraints?
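
For reference, a deferred constraint looks like this (a generic sketch; the
table and constraint names are made up). The point is that the queue of
pending checks is held in memory until commit, which can grow very large in a
big transaction:

```sql
-- Checks for this FK are queued until commit instead of firing per row;
-- the backlog of pending trigger events lives in backend memory.
ALTER TABLE orders
  ADD CONSTRAINT orders_customer_fk
  FOREIGN KEY (customer_id) REFERENCES customers (id)
  DEFERRABLE INITIALLY DEFERRED;

-- Inside a transaction you can force the queued checks to run early,
-- which also releases the memory they were holding:
SET CONSTRAINTS ALL IMMEDIATE;
```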

Another cause that pops up regularly is that people set work_mem too high 
globally. work_mem is allocated per sort or hash operation, and a single query 
can run several of those at once - multiply that by the number of connections 
and the total can exceed what's physically available, starving other resources 
or exhausting memory outright.
What are your relevant postgres settings (max connections, work_mem, shared 
mem, etc.) and how much physical, total and shared memory does your server have?
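
You can check the relevant settings from psql (these are standard setting
names):

```sql
-- Show the memory-related settings for the current server
SHOW max_connections;
SHOW work_mem;
SHOW shared_buffers;
SHOW maintenance_work_mem;
```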

Another possibility is that your function stores large amounts of data in 
variables that are not backed by database tables. All of that data lives in 
memory, and even if it gets pushed to swap at some point (not sure it will, 
and it would seriously hurt performance), there is a limit.
If this is the case, maybe you could use temporary tables to process that data 
instead of trying to do it all in memory.
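
As a sketch (all table, column and function names here are invented): instead
of accumulating rows in a plpgsql array variable, stage the intermediate
results in a temporary table, which can spill to disk:

```sql
CREATE OR REPLACE FUNCTION process_step() RETURNS void AS $$
BEGIN
    -- Intermediate results go into a temp table (which can spill to disk)
    -- instead of a plpgsql variable held entirely in memory.
    CREATE TEMP TABLE intermediate ON COMMIT DROP AS
        SELECT id, expensive_calc(payload) AS result
        FROM   input_data;

    UPDATE target t
    SET    value = i.result
    FROM   intermediate i
    WHERE  t.id = i.id;
END;
$$ LANGUAGE plpgsql;
```

ON COMMIT DROP means the table cleans itself up when the transaction ends.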

> So, my idea for solving that problem would be to cut the
> big function into smaller functions. But, I don't want to write 30
> function calls at the end - I would rather like to have one function
> which is calling all these small functions, so I would only have to
> write one sql-query at the end.

Splitting up big functions into smaller functions is always a good idea - 
that's just good programming practice. It won't cut down the size of your 
transaction though.

> What I fear is either, that, if this
> function calls the other functions, everything is only one transaction
> again and I get memory overflow once more.


If the problem persists, maybe you could post your function somewhere. As it's 
apparently a rather long function, can you strip it down to something that 
still causes it to run out of memory but that will be a bit easier for the 
people on this list to wade through?

Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general