Hi Jeff,

> I am looking to set up a mod_perl handler that keeps track of the
> count of requests coming in. Each child process will store this data
> in local memory, and after 5-10 minutes have passed, each child
> process will merge its data into a central database, the goal being
> that each child will not have to hit a database for every request.

I agree with the people saying that memcached, Cache::FastMmap, or an
in-memory file is probably fast enough to hit on every request.  In
general though, accumulating counts locally and dumping them to a db
now and then is not a bad way to go for non-critical data.
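
For instance, here's a minimal sketch using Cache::FastMmap; the
share_file path and the key scheme are just assumptions for
illustration.  get_and_set() does an atomic read-modify-write, so
every child can safely bump the same counter:

  use strict;
  use warnings;
  use Cache::FastMmap;

  # One mmap'ed file shared by all children (path is hypothetical).
  my $cache = Cache::FastMmap->new(
      share_file => '/tmp/request-counts.mmap',
      init_file  => 0,  # don't wipe existing counts when a child starts
  );

  sub count_request {
      my ($key) = @_;
      # Atomic read-modify-write across all child processes.
      $cache->get_and_set($key, sub {
          my ($k, $count) = @_;
          return ($count || 0) + 1;
      });
  }

Keep in mind Cache::FastMmap is a cache, so entries can be evicted
when the file fills up; size it generously if you can't afford to
lose counts.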

> The problem is --- how do I additionally have each child merge its
> data on a schedule -- that is, without relying only on an incoming
> request to "hit" that specific child process?

You can't.  The nature of Apache is that it responds to network
events, and cleanup handlers likewise only fire after a request.
You could rig up a cron job to hit your server regularly, and if the
data were shared between the children then whichever child picked up
that request could write it to the db, but that seems a lot harder
than the alternatives already suggested.
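
If you did go that route, the cron side could be as simple as this
(the flush URL is hypothetical, handled by whatever handler writes
the shared data to the db):

  # crontab entry: ping the server every 5 minutes
  */5 * * * * wget -q -O /dev/null http://localhost/flush-counters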

> Attempt #2 --- register a Clean Up hook. This doesn't seem to work for
> me because, as I understand so far, assigning a reference to a sub via
> PerlCleanupHandler is not the same as calling the object's method.

First, you could just store this data in a $My::Global::Counter and
read it from anywhere.  Each child has its own variable storage, so
this is safe.
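
As a sketch (the package and key names are just for illustration):

  use strict;
  use warnings;

  package My::Global;
  our %counter;    # one copy per child; persists across its requests

  package My::Handler;
  use Apache2::Const -compile => qw(OK);

  sub handler {
      my $r = shift;
      # Each child increments only its own copy, so no locking needed.
      $My::Global::counter{ $r->uri }++;
      return Apache2::Const::OK;
  }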

Second, you should be able to make a cleanup handler call your sub
as a method.
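
For example, a closure pushed at request time can capture your object
and call a method on it during the cleanup phase.  In this sketch,
My::Counter and its methods are hypothetical, and I'm assuming
mod_perl 2:

  use strict;
  use warnings;
  use Apache2::RequestUtil ();
  use Apache2::Const -compile => qw(OK);
  use My::Counter;                  # hypothetical counter class

  my $counter = My::Counter->new;   # created once per child

  sub handler {
      my $r = shift;
      $counter->record($r->uri);    # hypothetical method
      # The closure captures $counter, so the cleanup phase really
      # does call your object's method, not just a bare sub.
      $r->push_handlers(PerlCleanupHandler => sub {
          $counter->flush_if_stale; # hypothetical: db write on schedule
          return Apache2::Const::OK;
      });
      return Apache2::Const::OK;
  }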

> - Perhaps each child process will need to use its own SQLite or
> similar cache?

SQLite may well be slower than your real database, so I wouldn't do
that without testing.

BTW, how are you configuring a handler to create a $self that lasts
across multiple requests?

- Perrin
