AgentM <[EMAIL PROTECTED]> writes:
> On Aug 31, 2006, at 11:18, [EMAIL PROTECTED] wrote:
>> I'm attempting to understand why prepared statements would be used for
>> long enough for tables to change to a point that a given plan will
>> change from 'optimal' to 'disastrous'.

> Scenario: A web application maintains a pool of connections to the  
> database. If the connections have to be regularly restarted due to a  
> postgres implementation detail (stale plans), then that is a database  
> deficiency.

The two major complaints that I've seen are

* plpgsql's prepared plans don't work at all for scenarios involving
temp tables that are created and dropped in each use of the function:
the cached plan still references the dropped table, so the plan needs
to be regenerated on every successive call.  Right now we tell people
they have to use EXECUTE, which is painful and gives up unnecessary
amounts of performance (because it might well be useful to cache a
plan for the lifespan of the table); see the first sketch after this
list.

* for parameterized queries, a generic plan gives up too much
performance compared to one generated for specific constant parameter
values; the second example after this list illustrates the gap.
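
As a minimal sketch of the EXECUTE workaround for the temp-table case
(the function and table names here are made up for illustration):

    CREATE OR REPLACE FUNCTION count_scratch() RETURNS bigint AS $$
    DECLARE
        n bigint;
    BEGIN
        CREATE TEMP TABLE scratch(id integer);
        -- A plain "SELECT ... FROM scratch" would be planned on the
        -- first call and cached; the next call recreates the temp
        -- table with a new OID, and the stale plan fails.  EXECUTE
        -- replans the query text on every call instead.
        EXECUTE 'INSERT INTO scratch SELECT generate_series(1, 1000)';
        EXECUTE 'SELECT count(*) FROM scratch' INTO n;
        DROP TABLE scratch;
        RETURN n;
    END;
    $$ LANGUAGE plpgsql;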
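
And to illustrate the generic-plan problem, compare the plan for a
prepared statement against an equivalent query with the constant
inlined (the orders table and its skewed status column are
hypothetical):

    PREPARE get_orders(text) AS
        SELECT * FROM orders WHERE status = $1;

    -- The generic plan must work for any parameter value, so the
    -- planner cannot exploit the fact that 'cancelled' is rare;
    -- it may settle for a seqscan where the query with the literal
    -- would get an indexscan on status.
    EXPLAIN EXECUTE get_orders('cancelled');
    EXPLAIN SELECT * FROM orders WHERE status = 'cancelled';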

Neither of these problems has anything to do with statistics getting
stale.

                        regards, tom lane
