Hi Tom,

> This "latches" stuff is irrelevant to us.

Well, that's good to know anyway, thanks for setting me straight. Maybe
Oracle could take a leaf out of PG's book instead of the other way around. I
recall that you mentioned the caching of the schema before, so even though I
assumed PG was latching the metadata, I had begun to wonder if it was
actually necessary.

While it's obviously not as critical as I thought, I think there may still
be some potential for query caching in PG. It would be nice to have the
option anyway, as different applications have different needs.
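(As an aside, I realise PG's PREPARE/EXECUTE already gives a per-session
version of this. A rough sketch of what I mean, using a made-up orders
table:

    PREPARE get_order (integer) AS
        SELECT * FROM orders WHERE order_id = $1;

    EXECUTE get_order(42);  -- planned once for this session
    EXECUTE get_order(43);  -- re-uses the stored plan

The plan only lives for the life of the backend, of course, which is the
backend-local cache you describe below.)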

I think that re-use of SQL in applications (i.e. controlling the
proliferation of SQL statements that are minor variants of each other) is a
good goal for maintainability, even if it doesn't have a major impact on
performance, as you seem to be suggesting in the case of PG. Even complex
queries that must be constructed dynamically typically have only a finite
number of variations and can still use bind variables, so in a well-tuned
system they should still be viable candidates for caching (i.e. if they
aren't being bumped out of the cache by thousands of little queries that
don't use binds).
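To illustrate what I mean by minor variants, suppose (hypothetically) an
application builds literal values into the statement text:

    SELECT name FROM customers WHERE region = 'APAC';
    SELECT name FROM customers WHERE region = 'EMEA';
    -- a distinct statement per value, so each would occupy its own cache slot

Using a bind variable collapses these into one cacheable statement:

    PREPARE customers_by_region (text) AS
        SELECT name FROM customers WHERE region = $1;

    EXECUTE customers_by_region('APAC');
    EXECUTE customers_by_region('EMEA');

The customers table and queries are invented for the example, but that's the
pattern I'd want any shared cache to reward.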

I'll just finish by saying that developing applications in a way that would
take advantage of any query caching still seems like good practice to me,
even if the target DBMS has no query caching. For now, that's what I plan to
do in future PG/Oracle/Hypersonic (my 3 favourite DBMSs) application
development anyway.

Regards
Iain


----- Original Message ----- 
From: "Tom Lane" <[EMAIL PROTECTED]>
To: "Iain" <[EMAIL PROTECTED]>
Cc: "Jim C. Nasby" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Tuesday, September 28, 2004 12:17 PM
Subject: Re: [PERFORM] Caching of Queries


> "Iain" <[EMAIL PROTECTED]> writes:
> > I can only tell you (roughly) how it works with Oracle,
>
> Which unfortunately has little to do with how it works with Postgres.
> This "latches" stuff is irrelevant to us.
>
> In practice, any repetitive planning in PG is going to be consulting
> catalog rows that it draws from the backend's local catalog caches.
> After the first read of a given catalog row, the backend won't need
> to re-read it unless the associated table has a schema update.  (There
> are some other cases, like a VACUUM FULL of the catalog the rows came
> from, but in practice catalog cache entries don't change often in most
> scenarios.)  We need to place only one lock per table referenced in order
> to interlock against schema updates; not one per catalog row used.
>
> The upshot of all this is that any sort of shared plan cache is going to
> create substantially more contention than exists now --- and that's not
> even counting the costs of managing the cache, ie deciding when to throw
> away entries.
>
> A backend-local plan cache would avoid the contention issues, but would
> of course not allow amortizing planning costs across multiple backends.
>
> I'm personally dubious that sharing planning costs is a big deal.
> Simple queries generally don't take that long to plan.  Complicated
> queries do, but I think the reusability odds go down with increasing
> query complexity.
>
> regards, tom lane

