Perrin Harkins wrote:

> Sure, but why waste resources?

Because it's easy? :-)

> > As for the simplicity, having multiple individual custom cron jobs is
> > simpler than one single generic cron job?
> 
> Yes, much simpler, at least for the scheduling and dispatching part.
> Instead of designing database tables to hold timing info on jobs and
> code to check it that is smart enough to remember when it last ran
> and prevent race conditions, you can write a simple crontab with one
> call to wget per job.  The actual implementation of the jobs is
> pretty much identical either way.
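
True, the crontab itself is short -- something like this, I suppose
(trigger URLs made up):

    # one wget call per job
    0 2 * * *    wget -q -O /dev/null http://localhost/jobs/expire
    */15 * * * * wget -q -O /dev/null http://localhost/jobs/poll
    30 4 * * 0   wget -q -O /dev/null http://localhost/jobs/report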

I guess two people's "simpler" aren't always the same: I find it easier
to lay out a table and query it than to hack up something that fiddles
with my crontab safely.
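
Something like this is what I have in mind -- schema entirely made up:

    CREATE TABLE scheduled_jobs (
        job_id        INTEGER PRIMARY KEY,
        url           VARCHAR(255),  -- trigger URL for the job
        interval_secs INTEGER,       -- seconds between runs
        last_run      INTEGER        -- epoch time of the last run
    );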

> Do it whatever way suits you.  I'm just suggesting that you try the lazy
> way if possible.

I could mash the two together: have a single URL trigger that looks up
the tasks to do in a database, but have it schedule its own next run
using "at". Whew, there *is* a whole lot more than one way to do it! :-)

> > I was thinking of having a variable with a timestamp of when we last
> > checked the database
> 
> That will have to be some sort of shared memory or file-based thing, since
> you won't be using the same process each time.
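
A file-based check with flock() would be one minimal way to handle
both the sharing and the races -- something like:

    use Fcntl qw(:flock);

    # Let exactly one process per interval claim the work; everyone
    # else sees a fresh timestamp and skips it.  ($stampfile is any
    # writable path, e.g. /tmp/last-check.)
    sub my_turn {                        # hypothetical helper
        my ($stampfile, $interval) = @_;
        open my $fh, '+>>', $stampfile or die "$stampfile: $!";
        flock($fh, LOCK_EX)            or die "flock: $!";
        seek($fh, 0, 0);
        chomp(my $last = <$fh> || 0);
        if (time - $last >= $interval) {
            truncate($fh, 0);
            print $fh time, "\n";
            close $fh;                   # releases the lock
            return 1;                    # our turn: do the work
        }
        close $fh;
        return 0;                        # someone else beat us to it
    }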

Though really, I was intending to be lazy and let a bunch of processes
find out there is nothing to do because one of them has already done
the work. ;-)

Ahh, the joy of wasting resources...

Would anyone be interested in seeing an Apache::Schedule, with an API
similar to the Tcl API in AOLserver?

http://aolserver.com/docs/tcldev/tapi-114.htm
http://aolserver.com/docs/tcldev/tapi-113.htm
http://aolserver.com/docs/tcldev/tapi-115.htm
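
Roughly what I'm picturing -- every name here is hypothetical, just
mirroring ns_schedule_proc, ns_schedule_daily and ns_schedule_weekly:

    use Apache::Schedule;    # hypothetical, doesn't exist yet

    # like ns_schedule_proc: run a coderef every 300 seconds
    Apache::Schedule->every(300, \&expire_sessions);

    # like ns_schedule_daily: run once a day at 02:30
    Apache::Schedule->daily(2, 30, \&rotate_logs);

    # like ns_schedule_weekly: run Sundays at 04:00
    Apache::Schedule->weekly(0, 4, 0, \&weekly_report);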

-- 
"We make rope." -- Rob Gingell on Sun Microsystem's new virtual memory.
