Re: [GENERAL] Query meltdown: caching results

2008-02-27 Thread Gordon
(Sorry for the repost but I thought this would be appropriate to both
groups. I did tell Google to delete my first post but odds are some
guys got that copy already anyway)

After a lot of hair-pulling, I finally found a mechanism in PHP for
doing what I wanted.  I just had to know two things:

1) how to get the PDO engine to use my customized prepared statement
class instead of PDOStatement, and
2) that extending PDOStatement to transparently add results caching was
too difficult and complex in the timeframe required.

Once I knew these things I made a PDOStatement subclass that, instead
of trying to transparently add caching to the existing methods, adds a
couple of new ones.  Code below:

<?php

// Note: the opening of this listing was mangled by the list archive.  The
// class header, property list and the first method's signature below are
// reconstructed from the surviving body; the method name 'ask' is a guess,
// chosen to pair with 'tell'.  DEBUG_TX is assumed to be defined elsewhere.
defined ('DEBUG_TX') || define ('DEBUG_TX', false);

class CachingStatement extends PDOStatement
// PDOStatement subclass that adds a per-statement result cache
{
	private $database;
	private $resultCache	= array ();
	private $hits		= 0;
	private $misses		= 0;

	public function ask (array $params = array ())
	// Execute a prepared SELECT, serving repeat calls from the cache
	{
		$hash	= md5 (serialize ($params));
		if (!isset ($this -> resultCache [$hash]))
		{
			$this -> misses++;
			// Execute the query
			if ($this -> execute ($params))
			{
				// Cache the results
				$this -> resultCache [$hash]	= $this -> fetchAll (PDO::FETCH_ASSOC);
			}
		}
		else
		{
			$this -> hits++;
		}
		// Guard against a failed execute () leaving the slot unset
		return (isset ($this -> resultCache [$hash])?
			$this -> resultCache [$hash]:
			false);
	}

	public function tell (array $params = array ())
	// Execute a prepared statement that causes the database to be modified
	{
		// Execute the query
		if ($this -> execute ($params))
		{
			$rowCount	= $this -> rowCount ();
			if ($rowCount)
			{
				// Tell the parent Database object to clear statement caches
				$this -> database -> clearResults ();
			}
			return ($rowCount);
		}
		return (false);
	}

	public function clearResults ()
	// Clear cache
	{
		$this -> resultCache	= array ();
	}

	private function __construct ($db)
	// Class constructor (PDO instantiates this class itself)
	{
		$this -> database	= $db;
		//print_r ($this);
	}
}

class Database extends PDO
// Adds some extra functionality to the built-in PDO class
{
	private $statementCache	= array ();
	private $txCount	= 0;
	private $txErr		= false;

	// Prepared statement caching
	public function prepare ($statement, array $options = array ())
	{
		$hash	= md5 ($statement);
		if ((!isset ($this -> statementCache [$hash]))
		|| (!is_object ($this -> statementCache [$hash])))
		{
			//echo ('Preparing statement "' . $statement . '"');
			$this -> statementCache [$hash]	= parent::prepare ($statement, $options);
		}
		else
		{
			//echo ('Statement "' . $statement . '" already prepared');
		}
		return ($this -> statementCache [$hash]);
	}

	public function clearResults ()
	// Clear the results cache of all associated prepared statements
	{
		foreach ($this -> statementCache as $thisStatement)
		{
			$thisStatement -> clearResults ();
		}
	}

	// Add support for transaction nesting
	public function beginTransaction ()
	{
		$result	= true;	// Already inside a transaction: nothing to do
		if (($this -> txCount == 0) && (!$this -> txErr))
		{
			$result	= parent::beginTransaction ();
		}
		$this -> txCount++;
		if (DEBUG_TX)
		{
			echo ('begin: ' . $this -> txCount . ' transaction(s)');
		}
		return ($result);
	}

	public function commit ()
	{
		$result	= true;	// Still nested: defer the real commit
		$this -> txCount--;
		if ($this -> txCount <= 0)
		{
			// A nested rollback poisons the whole transaction
			$result	= $this -> txErr? parent::rollback (): parent::commit ();
			$this -> txErr	= false;
		}
		if (DEBUG_TX)
		{
			echo ('commit: ' . $this -> txCount . ' transaction(s)');
		}
		return ($result);
	}

	public function rollback ()
	{
		$result	= true;	// Still nested: just flag the error for commit ()
		$this -> txErr	= true;
		$this -> txCount--;
		if ($this -> txCount <= 0)
		{
			$result	= parent::rollback ();
			$this -> txErr	= false;
		}
		if (DEBUG_TX)
		{
			echo ('rollback: ' . $this -> txCount . ' transaction(s)');
		}
		return ($result);
	}

	// Housekeeping
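The listing breaks off at the housekeeping section, which is presumably
where the statement class gets wired up.  For reference, a minimal sketch
of how a PDOStatement subclass is normally registered with
PDO::ATTR_STATEMENT_CLASS (the DSN, table and values are illustrative,
and 'ask' / 'CachingStatement' are the reconstructed names used above):

<?php
// Hypothetical wiring, not from the original post: ATTR_STATEMENT_CLASS
// makes PDO instantiate the given subclass for every prepared statement,
// passing the extra array as constructor arguments.
$db = new Database ('pgsql:host=localhost;dbname=cms', 'user', 'secret');
$db -> setAttribute (PDO::ATTR_STATEMENT_CLASS,
	array ('CachingStatement', array ($db)));

$stmt = $db -> prepare ('SELECT * FROM documents WHERE itm_parent = ?');
$rows = $stmt -> ask (array (42));	// first call hits the database
$rows = $stmt -> ask (array (42));	// repeat call is served from the cache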


Re: [GENERAL] Query meltdown: caching results

2008-02-27 Thread paul rivers

Gordon wrote:

On Feb 26, 5:26 pm, [EMAIL PROTECTED] (Gregory Stark) wrote:

"Norman Peelman" <[EMAIL PROTECTED]> writes:

My options are, as far as I can tell,

1) replace the Database PDO extending class with something else that
provides query results caching in PHP, or
2) get Postgres itself to cache the results of queries to avoid
running them repeatedly during a session.

You might consider looking at memcached. One way to use it would be to have
the PHP application check for the cached object first and use it rather than
do any database queries. Then you can use pgmemcached to allow triggers to
invalidate cached objects whenever the underlying data changes. (Or you could
even just use pl/php to update or invalidate the cached object through the
same code library)

--
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com
  Ask me about EnterpriseDB's Slony Replication support!



The problem is that there is a vast number of redundant queries being
run. [...] the principle that objects should not depend on a knowledge
of the inner workings of unrelated objects. [...] Results caching would
eliminate the problem of the same queries being run over and over. [...]
The problem is the mechanics of actually implementing this caching.

I'm using prepared statements almost exclusively throughout the design,
meaning that the PDOStatement class probably needs to be extended
somehow, and my Database prepare() [...]. I can't have been the first
person to run up against this problem.


With memcached, your methods to retrieve data go from "get data from db"
to "get data from cache, and on cache miss get from db and leave a copy
for the next guy in cache".  Updating the data is not much more
complicated.  I don't see why this wouldn't work for you: it won't
compromise anything on the encapsulation front you are concerned about,
and you can still use your prepared statements for hitting the db, etc.
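A minimal sketch of that read path with the pecl Memcache extension (the
key scheme, table name and TTL are illustrative assumptions):

<?php
// Cache-aside read: try memcached first; on a miss, query the database
// and leave a copy for the next request.
function getDocument (PDO $db, Memcache $cache, $docId)
{
	$key = 'doc:' . $docId;			// illustrative key scheme
	$doc = $cache -> get ($key);
	if ($doc === false)			// cache miss
	{
		$stmt = $db -> prepare ('SELECT * FROM documents WHERE itm_id = ?');
		$stmt -> execute (array ($docId));
		$doc  = $stmt -> fetch (PDO::FETCH_ASSOC);
		$cache -> set ($key, $doc, 0, 300);	// keep for five minutes
	}
	return ($doc);
}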


Regards,
Paul






Re: [GENERAL] Query meltdown: caching results

2008-02-27 Thread Gordon
On Feb 26, 5:26 pm, [EMAIL PROTECTED] (Gregory Stark) wrote:
> "Norman Peelman" <[EMAIL PROTECTED]> writes:
> >> My options are, as far as I can tell,
>
> >> 1) replace the Database PDO extending class with something else that
> >> provides query results caching in PHP, or
> >> 2) get Postgres itself to cache the results of queries to avoid
> >> running them repeatedly during a session.
>
> You might consider looking at memcached. One way to use it would be to have
> the PHP application check for the cached object first and use it rather than
> do any database queries. Then you can use pgmemcached to allow triggers to
> invalidate cached objects whenever the underlying data changes. (Or you could
> even just use pl/php to update or invalidate the cached object through the
> same code library)
>
> --
>   Gregory Stark
>   EnterpriseDB  http://www.enterprisedb.com
>   Ask me about EnterpriseDB's Slony Replication support!
>

Thanks for the replies, but the problem is really centered around how
my script interacts with the database.  I know what the problem is,
and if I were using a structured approach I could figure out
workarounds fairly quickly.  But I chose to do this project in OOP for
a few reasons: the old CMS was hacked together in a hurry by an
employee who is now long gone, and is horribly written and completely
unmaintainable; the size of the project seemed to warrant an OOP
approach; we recently upgraded to PHP 5 and I wanted to take advantage
of the new features; PDO and prepared statements appeared from the
documentation to offer major security and performance enhancements;
and I wanted to increase my experience with OOP development, as most
work I've done before now has been structured.  The high level of
compartmentalization that OOP demands is coming back to haunt me
now.

The problem comes when a document is published or unpublished.  All
the documents that relate to that document need to be republished to
add or remove the link to the document in question.  When the document
is published or unpublished the script gets related documents (at the
moment other documents that share the same folder, but this may change
in the future to cover more related content) and republishes them.

Each document has associated data, such as the ID of the template to
apply, its parent, its path back to root, etc., that is generated by
querying the database in various ways.  For example, the route back to
root is fetched by iteratively getting the parent folder, then that
folder's parent, and so on until the script hits the root.  Templates
are fetched by looking at the template ID associated with the document.
If this is 0, then the script goes to the parent folder and gets the
template associated with the folder.  If that is 0 as well then it
iteratively goes up until it finds a template to apply or until it hits
the root, in which case it applies a default template.  The code
fragment from the script that does this looks like this:

$db -> beginTransaction ();
if ($doc = CmsItem::factory ('CmsDoc', intval ($_GET ['itm_id']), $db, $user))
{
	if ((($doc -> itemProps ['itm_publish']) && ($doc -> unpublish ()))
	|| ($doc -> publish ()))
	{
		// Republish related documents
		foreach ($doc -> getSiblingObjects () as $thisDoc)
		{
			if ((get_class ($thisDoc) == 'CmsDoc')
			&& ($thisDoc -> itemProps ['itm_publish']))
			{
				$thisDoc -> republish ();
			}
		}
		// Item status changed
		$db -> commit ();
		$_SESSION ['messageStack'][] = ($doc -> itemProps ['itm_publish']?
			'Item published':
			'Item unpublished');
	}
	else
	{
		// Couldn't change published status
		$db -> rollback ();
		$_SESSION ['messageStack'][] = ($doc -> itemProps ['itm_publish']?
			'Unable to unpublish item':
			'Unable to publish item');
	}
}

getSiblingObjects () runs a query that gets a list of IDs that share
the same parent as the current document.  It then iterates the list,
spawns a new CMS item for each entry, and returns them as an array.
As folders could be returned as well as documents, we only run
republish () on the documents.

CmsDoc -> publish () and CmsDoc -> unpublish () toggle a boolean
column in the database between true and false for the item being (un)
published.  unpublish () also deletes the concrete file associated
with the DB entry.

publish () and republish () write out a concrete HTML file based on
the content stored in the table for the document 
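The repeated parent-folder walks described above are a natural target
for per-request memoization; a minimal sketch (the table and column
names are guesses, not the actual schema):

<?php
// Hypothetical helper: walk up the folder tree once per folder, memoizing
// each path so siblings that share a parent reuse the same walk.
class PathCache
{
	private $db;
	private $paths = array ();	// folder id => array of ancestor ids

	public function __construct (PDO $db)
	{
		$this -> db = $db;
	}

	public function routeToRoot ($folderId)
	{
		if (!isset ($this -> paths [$folderId]))
		{
			$path = array ();
			$stmt = $this -> db -> prepare
				('SELECT itm_parent FROM cms_items WHERE itm_id = ?');
			$id   = $folderId;
			while ($id)		// 0 / NULL marks the root
			{
				$path []	= $id;
				$stmt -> execute (array ($id));
				$id	= $stmt -> fetchColumn ();
			}
			$this -> paths [$folderId] = $path;
		}
		return ($this -> paths [$folderId]);
	}
}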

Re: [GENERAL] Query meltdown: caching results

2008-02-26 Thread Gregory Stark
"Norman Peelman" <[EMAIL PROTECTED]> writes:

>> My options are, as far as I can tell,
>>
>> 1) replace the Database PDO extending class with something else that
>> provides query results caching in PHP, or
>> 2) get Postgres itself to cache the results of queries to avoid
>> running them repeatedly during a session.

You might consider looking at memcached. One way to use it would be to have
the PHP application check for the cached object first and use it rather than
do any database queries. Then you can use pgmemcached to allow triggers to
invalidate cached objects whenever the underlying data changes. (Or you could
even just use pl/php to update or invalidate the cached object through the
same code library)
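Where the trigger route isn't available, the same invalidation can be
done from the PHP write path; a rough sketch (the key scheme and table
are illustrative, and the read side is assumed to re-cache on a miss):

<?php
// Application-side alternative to the pgmemcached trigger approach:
// after a write, drop the cached object so the next read repopulates it.
function setPublished (PDO $db, Memcache $cache, $docId, $publish)
{
	$stmt = $db -> prepare
		('UPDATE documents SET itm_publish = ? WHERE itm_id = ?');
	$stmt -> execute (array ($publish? 't': 'f', $docId));
	$cache -> delete ('doc:' . $docId);	// invalidate the stale copy
}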

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com
  Ask me about EnterpriseDB's Slony Replication support!



Re: [GENERAL] Query meltdown: caching results

2008-02-26 Thread Gordon
On Feb 26, 11:11 am, Gordon <[EMAIL PROTECTED]> wrote:
> I'm working on a CMS that, in addition to the database-stored version
> of articles for easy searching, sorting, etc, also stores a HTML file
> version so pages can be fetched with the minimum of overhead (browsing
> articles has no more overhead than accessing any other HTML file on
> the server).
>
> As I've been trying to keep the system modular I've taken to using an
> OO approach to the server side scripts, which are all written in PHP 5
> and use PDO for database access.  I've also been using prepared
> statements almost exclusively for security and performance reasons.
> I've tried to wrap sequences of queries in transactions as well, to
> eliminate the "every query is its own transaction" overhead.
>
> With previous projects which I wrote using structured programming
> methods it was quite easy to hold caches of results and keep database
> queries to a minimum, but I've found this extremely difficult to pull
> off when using the OO approach, and now it's starting to have some
> real performance consequences.  The biggest one comes when publishing
> a document that has siblings.  CMS content is organized in a tree with
> folders, subfolders and documents.  A document can be published, where
> both a HTML and database copy exist, or unpublished, where only the
> database version exists, thus denying visitors to the site access to
> it.  Documents in a folder get a sidebar with links to the other
> documents in the same folder, and when you change the published status
> of a document then all the other documents that are also published in
> that folder have to be republished in order to update their
> sidebars.
>
> This means fetching a list of all the documents with the same parent
> and that have a published flag status of true, using the text stored
> in the database to generate the HTML page and saving it to disk.
> Documents have an associated template, which also has to be fetched
> from the database.  And all documents have data such as their path,
> which is a chain of the document's parents back to the root so that
> things like breadcrumbs can be generated.
>
> In the structured approach I'd have just cached stuff like the trail
> back to the root as I know it'll be the same for all documents, so I'd
> only have to run the sequences of queries to get the full trail once.
> But as each instance of a document is independent of all the others
> doing things like this is proving really difficult.
>
> I need to find a way of not running queries that I don't need to,
> either in the PHP script, or in the Postgres database.  What I need is
> for a result set to be cached somewhere, either by Postgres or PHP, so
> when it sees the same query again in a given session it just returns
> the previously fetched result set.  The cache also needs to be able to
> disregard its cached result sets when an event that changes a table
> occurs (insert, update, delete, etc).
>
> On the PHP side I've written a simple Database class that extends PDO
> and that I use in its place.  It's a simple class that basically I use
> to allow me to nest calls to beginTransaction(), commit () and
> rollback ().  (It only starts an actual transaction if the counter is 0.
> Otherwise it just increments the counter.  Commit only actually
> commits when the counter is 1, and decrements it otherwise.  Rollback
> sets an error flag and decrements the counter, and only rolls back
> when the counter is 1.  If the error flag is set then commit will
> actually roll back instead.)
>
> My options are, as far as I can tell,
>
> 1) replace the Database PDO extending class with something else that
> provides query results caching in PHP, or
> 2) get Postgres itself to cache the results of queries to avoid
> running them repeatedly during a session.
>
> I seem to remember MySQL providing some kind of results caching, can
> Postgres do the same? Has anyone else run into similar problems and
> how did they overcome them?

I have an idea for how to do it but I'm not quite sure how to
accomplish it fully.  Aspects involving modifications to the tables
are going to be particularly problematic.

My idea is to extend the PDOStatement class with an internal result
cache.  I'm already caching PDOStatements in order to prevent the
script from trying to prepare the same queries over and over again.
The cache will be an array, and the execute (), fetch (), fetchAll (),
etc. methods will be aware of the array and return values from it if
possible.

Things risk getting really tricky really quickly, however.  If a
modification is made to a table, then any or all of the cached data in
all the PDOStatements may no longer be valid and will need to be
flushed.  This is leading me to suspect that this is a far from ideal
way of doing things.
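One way to make the flushing less drastic is to tag each cached
statement with the tables it reads and flush only the statements a
given write can affect; a rough sketch (entirely hypothetical, assuming
statements expose a clearResults () method like the one in the
follow-up post):

<?php
// Hypothetical registry: statements are registered against the tables
// they read, so a write to one table flushes only the dependent caches.
class TableAwareCache
{
	private $byTable = array ();	// table name => statements to flush

	public function register ($statement, array $tables)
	{
		foreach ($tables as $table)
		{
			$this -> byTable [$table][] = $statement;
		}
	}

	public function tableChanged ($table)
	// Call after any INSERT/UPDATE/DELETE against $table
	{
		if (isset ($this -> byTable [$table]))
		{
			foreach ($this -> byTable [$table] as $statement)
			{
				$statement -> clearResults ();
			}
		}
	}
}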

I know that Postgres can cache query plans, but what about results?
Can/do they get cached too?


Re: [GENERAL] Query meltdown: caching results

2008-02-26 Thread Norman Peelman

Gordon wrote:

I'm working on a CMS that, in addition to the database-stored version
of articles for easy searching, sorting, etc, also stores a HTML file
version so pages can be fetched with the minimum of overhead (browsing
articles has no more overhead than accessing any other HTML file on
the server).

As I've been trying to keep the system modular I've taken to using an
OO approach to the server side scripts, which are all written in PHP 5
and use PDO for database access.  I've also been using prepared
statements almost exclusively for security and performance reasons.
I've tried to wrap sequences of queries in transactions as well, to
eliminate the "every query is its own transaction" overhead.

With previous projects which I wrote using structured programming
methods it was quite easy to hold caches of results and keep database
queries to a minimum, but I've found this extremely difficult to pull
off when using the OO approach, and now it's starting to have some
real performance consequences.  The biggest one comes when publishing
a document that has siblings.  CMS content is organized in a tree with
folders, subfolders and documents.  A document can be published, where
both a HTML and database copy exist, or unpublished, where only the
database version exists, thus denying visitors to the site access to
it.  Documents in a folder get a sidebar with links to the other
documents in the same folder, and when you change the published status
of a document then all the other documents that are also published in
that folder have to be republished in order to update their
sidebars.

This means fetching a list of all the documents with the same parent
and that have a published flag status of true, using the text stored
in the database to generate the HTML page and saving it to disk.
Documents have an associated template, which also has to be fetched
from the database.  And all documents have data such as their path,
which is a chain of the document's parents back to the root so that
things like breadcrumbs can be generated.

In the structured approach I'd have just cached stuff like the trail
back to the root as I know it'll be the same for all documents, so I'd
only have to run the sequences of queries to get the full trail once.
But as each instance of a document is independent of all the others
doing things like this is proving really difficult.

I need to find a way of not running queries that I don't need to,
either in the PHP script, or in the Postgres database.  What I need is
for a result set to be cached somewhere, either by Postgres or PHP, so
when it sees the same query again in a given session it just returns
the previously fetched result set.  The cache also needs to be able to
disregard its cached result sets when an event that changes a table
occurs (insert, update, delete, etc).

On the PHP side I've written a simple Database class that extends PDO
and that I use in its place.  It's a simple class that basically I use
to allow me to nest calls to beginTransaction(), commit () and
rollback ().  (It only starts an actual transaction if the counter is 0.
Otherwise it just increments the counter.  Commit only actually
commits when the counter is 1, and decrements it otherwise.  Rollback
sets an error flag and decrements the counter, and only rolls back
when the counter is 1.  If the error flag is set then commit will
actually roll back instead.)

My options are, as far as I can tell,

1) replace the Database PDO extending class with something else that
provides query results caching in PHP, or
2) get Postgres itself to cache the results of queries to avoid
running them repeatedly during a session.

I seem to remember MySQL providing some kind of results caching, can
Postgres do the same? Has anyone else run into similar problems and
how did they overcome them?



  Don't know about Postgres, but yes, MySQL does have caching.  You could
also take a look at APC (Alternative PHP Cache), depending on your setup.
It's very easy to use, and very easy to monitor what's actually going on
with your pages in the cache.  Once a page is generated you can store it
in the cache and give it a time to live before going back to the db.
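For instance, a minimal sketch of that page-level APC idea (the key
scheme and TTL are illustrative, and buildPageFromDatabase is a
hypothetical helper):

<?php
// Hypothetical page cache using APC: serve the generated HTML from shared
// memory until its time-to-live expires, then rebuild from the database.
function renderPage ($pageId)
{
	$key  = 'page:' . $pageId;
	$html = apc_fetch ($key);
	if ($html === false)		// not cached, or TTL expired
	{
		$html = buildPageFromDatabase ($pageId);	// assumed to exist
		apc_store ($key, $html, 600);	// live for ten minutes
	}
	return ($html);
}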



--
Norman
Registered Linux user #461062



Re: [GENERAL] Query meltdown: caching results

2008-02-26 Thread Richard Huxton

Gordon wrote:

I need to find a way of not running queries that I don't need to,
either in the PHP script, or in the Postgres database.  What I need is
for a result set to be cached somewhere, either by Postgres or PHP, so
when it sees the same query again in a given session it just returns
the previously fetched result set.  The cache also needs to be able to
disregard its cached result sets when an event that changes a table
occurs (insert, update, delete, etc).


It's the second part that's fiddly (in the general case) if you do it in PHP.


If you're looking for a large-scale cache then memcached would suit your
needs. There's an add-on for PG that can keep it notified of changes:

  http://pgfoundry.org/projects/pgmemcache/


On the PHP side I've written a simple Database class that extends PDO
and that I use in its place.  It's a simple class that basically I use
to allow me to nest calls to beginTransaction(), commit () and
rollback ().  (It only starts an actual transaction if the counter is 0.
Otherwise it just increments the counter.  Commit only actually
commits when the counter is 1, and decrements it otherwise.  Rollback
sets an error flag and decrements the counter, and only rolls back
when the counter is 1.  If the error flag is set then commit will
actually roll back instead.)

My options are, as far as I can tell,

1) replace the Database PDO extending class with something else that
provides query results caching in PHP, or


There are a whole bunch of Pear classes for caching - Cache_Lite is 
simple to plug into an existing structure.
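A minimal sketch of wrapping one query in Cache_Lite (the cache
directory, lifetime and group name are illustrative):

<?php
require_once 'Cache/Lite.php';

// Hypothetical wrapper: serve a query's rows from Cache_Lite when present,
// otherwise run it and cache the result under a named group.
function cachedQuery (PDO $db, $sql, array $params)
{
	static $cache = null;
	if ($cache === null)
	{
		$cache = new Cache_Lite (array (
			'cacheDir'		 => '/tmp/cms_cache/',
			'lifeTime'		 => 600,
			'automaticSerialization' => true));
	}
	$id = md5 ($sql . serialize ($params));
	if (($rows = $cache -> get ($id, 'queries')) === false)
	{
		$stmt = $db -> prepare ($sql);
		$stmt -> execute ($params);
		$rows = $stmt -> fetchAll (PDO::FETCH_ASSOC);
		$cache -> save ($rows, $id, 'queries');
	}
	return ($rows);
}

A matching clean ('queries') call from the data-access layer after any
write invalidates the whole group at once.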



2) get Postgres itself to cache the results of queries to avoid
running them repeatedly during a session.


I seem to remember MySQL providing some kind of results caching, can
Postgres do the same? Has anyone else run into similar problems and
how did they overcome them?


No, but if you're serious about the caching you'll want to do it well 
above the data-access layer.


The main gains I've seen with a simple caching system have been:
 1. Big, static lookup lists (countries, catalogue sections etc).
 2. Whole pages / sections of pages
The trick with both is to cache as close to rendering as possible. So, 
the HTML in the case of pages/controls.


Make sure your data-access layer invalidates any relevant cache entries 
and you'll be fine (as long as you don't do any database manipulation 
outside your app - always have an "invalidate whole cache" function / 
script available for this).


Oh, and *do* make sure you've identified real gains first. It's 
distressing to spend two days optimising your caching only to realise 
you've gained 2% because you've missed the real bottle-neck.


--
  Richard Huxton
  Archonet Ltd



[GENERAL] Query meltdown: caching results

2008-02-26 Thread Gordon
I'm working on a CMS that, in addition to the database-stored version
of articles for easy searching, sorting, etc, also stores a HTML file
version so pages can be fetched with the minimum of overhead (browsing
articles has no more overhead than accessing any other HTML file on
the server).

As I've been trying to keep the system modular I've taken to using an
OO approach to the server side scripts, which are all written in PHP 5
and use PDO for database access.  I've also been using prepared
statements almost exclusively for security and performance reasons.
I've tried to wrap sequences of queries in transactions as well, to
eliminate the "every query is its own transaction" overhead.

With previous projects which I wrote using structured programming
methods it was quite easy to hold caches of results and keep database
queries to a minimum, but I've found this extremely difficult to pull
off when using the OO approach, and now it's starting to have some
real performance consequences.  The biggest one comes when publishing
a document that has siblings.  CMS content is organized in a tree with
folders, subfolders and documents.  A document can be published, where
both a HTML and database copy exist, or unpublished, where only the
database version exists, thus denying visitors to the site access to
it.  Documents in a folder get a sidebar with links to the other
documents in the same folder, and when you change the published status
of a document then all the other documents that are also published in
that folder have to be republished in order to update their
sidebars.

This means fetching a list of all the documents with the same parent
and that have a published flag status of true, using the text stored
in the database to generate the HTML page and saving it to disk.
Documents have an associated template, which also has to be fetched
from the database.  And all documents have data such as their path,
which is a chain of the document's parents back to the root so that
things like breadcrumbs can be generated.

In the structured approach I'd have just cached stuff like the trail
back to the root as I know it'll be the same for all documents, so I'd
only have to run the sequences of queries to get the full trail once.
But as each instance of a document is independent of all the others
doing things like this is proving really difficult.

I need to find a way of not running queries that I don't need to,
either in the PHP script, or in the Postgres database.  What I need is
for a result set to be cached somewhere, either by Postgres or PHP, so
when it sees the same query again in a given session it just returns
the previously fetched result set.  The cache also needs to be able to
disregard its cached result sets when an event that changes a table
occurs (insert, update, delete, etc).

On the PHP side I've written a simple Database class that extends PDO
and that I use in its place.  It's a simple class that I basically use
to allow me to nest calls to beginTransaction (), commit () and
rollback ().  (It only starts an actual transaction if the counter is 0.
Otherwise it just increments the counter.  Commit only actually
commits when the counter is 1, and decrements it otherwise.  Rollback
sets an error flag and decrements the counter, and only rolls back
when the counter is 1.  If the error flag is set then commit will
actually roll back instead.)
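To make the counter behaviour concrete, a small usage sketch
(publishDocument is a hypothetical helper that opens its own nested
transaction internally):

<?php
// Only the outermost begin/commit touch the real transaction; inner
// pairs just move the counter.
$db -> beginTransaction ();	// counter 0 -> 1: real BEGIN
publishDocument ($db, $docId);	// its internal begin/commit move the
				// counter 1 -> 2 -> 1: no-ops
$db -> commit ();		// counter 1 -> 0: real COMMIT, or a real
				// ROLLBACK if anything inside flagged an error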

My options are, as far as I can tell,

1) replace the Database PDO extending class with something else that
provides query results caching in PHP, or
2) get Postgres itself to cache the results of queries to avoid
running them repeatedly during a session.

I seem to remember MySQL providing some kind of results caching, can
Postgres do the same? Has anyone else run into similar problems and
how did they overcome them?

