On Saturday, August 28, 2010 08:35:07 am Tom Lane wrote:
> John Fabiani <jo...@jfcomputer.com> writes:
> > I have a program that loops through several thousand records.  The loop
> > contains a select statement that confirms that I don't have a dup and
> > then inserts a record into a different table.  IOW, the statement is
> > checking against a static table.  The size of the table I'm checking
> > does not change during the loop.  I'm sure many have done something
> > similar.
> 
> Are you really, truly making *no* changes to the table you're reading?
> 
> What I'm suspecting is that you are making such changes, in fact lots
> of them.  The number of live rows may not change, but the table is
> bloating with lots of dead row versions, which can't be cleaned up yet
> since the transaction that deleted them is still open.  This leads to
> O(N^2) or worse slowdown.
> 
> There are usually ways around this type of problem, but we'd need more
> details about what you're really doing.
> 
>                       regards, tom lane
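
For anyone reading the archive, Tom's dead-row point is easy to observe: while 
the big transaction is still open, the dead row versions pile up in the stats 
views.  A minimal check - the table name here is just an example:

    -- n_dead_tup climbs as the open transaction deletes/updates rows;
    -- VACUUM cannot reclaim them until that transaction ends.
    SELECT relname, n_live_tup, n_dead_tup
      FROM pg_stat_user_tables
     WHERE relname = 'my_lookup_table';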

I'm not making any changes to the lookup table.  But I did discover the 
performance problem - rather, I was helped into the light (on IRC).  
There is a custom function in my select statement.  It turns out that the 
function was labeled VOLATILE, and a simple change to STABLE made all the 
difference in the world.  I was not aware of this setting (I won't forget 
it).  Learn something every day!  
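
For the archives, the fix was a one-liner.  A sketch, with a made-up function 
name and signature:

    -- VOLATILE (the default) tells the planner the function may have
    -- side effects, so it must be re-evaluated for every row.  STABLE
    -- promises the same result for the same arguments within a single
    -- statement, which lets the planner evaluate it far less often.
    ALTER FUNCTION check_is_dup(integer) STABLE;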

But your answer has me thinking.  I often do exactly what you are describing 
(though of course not for thousands of records).  So how does one get around 
the bloat issue?
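
For what it's worth, the advice I've seen is either to commit in batches (so 
VACUUM can reclaim the dead rows between batches) or to replace the per-row 
loop with one set-based statement, which keeps the transaction short.  A 
rough sketch of the latter, with hypothetical table and column names:

    -- Insert only the rows not already present in the target table;
    -- one statement instead of thousands of select+insert round trips.
    INSERT INTO target_table (id, payload)
    SELECT s.id, s.payload
      FROM source_table s
     WHERE NOT EXISTS (SELECT 1 FROM target_table t WHERE t.id = s.id);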

Johnf  
