On Sat, 17 Jan 2004, Randal L. Schwartz wrote:

> Perrin> By the way, did you look at Randal's throttle code? (Posted as
> Perrin> Stonehenge::Throttle, I believe.) That's what I started with last
> Perrin> time I needed to do this. It worked well over NFS for a cluster.
>
> And I'm thinking about rewriting that using DBD::SQLite for the
> tracker, rather than my ad-hoc "use the filesystem as a database"
> code from before.
>
> Then, you could basically program it to block on any criterion you choose,
> like CPU seconds consumed in last 5 minutes, bytes transferred, number
> of hits, and spread that out by whatever divisions you wanted. It'd
> just be a matter of reducing the decision logic to a single number
> over a certain domain (requestor, resource, number) and then setting
> the blocking criterion (seconds, threshold).
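That reduction to "a single number over a domain" could end up pretty small. Something like this rough sketch, maybe - my own invention with DBD::SQLite, not Randal's actual code, and all the table and function names are made up:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    # One row per request event: who, when, and a numeric cost
    # (hits, bytes, CPU seconds -- anything you can reduce to a number).
    my $dbh = DBI->connect( 'dbi:SQLite:dbname=throttle.db', '', '',
                            { RaiseError => 1, AutoCommit => 1 } );

    $dbh->do( 'CREATE TABLE IF NOT EXISTS events (
                   requestor TEXT    NOT NULL,
                   stamp     INTEGER NOT NULL,
                   cost      REAL    NOT NULL )' );

    # Record one event for a requestor (an IP, a user, whatever).
    sub record {
        my ( $requestor, $cost ) = @_;
        $dbh->do(
            'INSERT INTO events (requestor, stamp, cost) VALUES (?, ?, ?)',
            undef, $requestor, time(), $cost );
    }

    # True if the requestor's summed cost over the last $window seconds
    # exceeds $threshold -- the single blocking criterion.
    sub blocked {
        my ( $requestor, $window, $threshold ) = @_;
        my ($total) = $dbh->selectrow_array(
            'SELECT COALESCE(SUM(cost), 0) FROM events
             WHERE requestor = ? AND stamp > ?',
            undef, $requestor, time() - $window );
        return $total > $threshold;
    }

    # e.g. block anyone who used more than 30 CPU seconds in 5 minutes:
    record( '10.0.0.1', 2.5 );
    print blocked( '10.0.0.1', 300, 30 ) ? "blocked\n" : "ok\n";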
If you put the thing on CPAN, so others could use it for applications, that'd be great. But I can't tell people "go cut and paste this code listing on Randal's site"!

The code I wrote isn't really designed so much to throttle requests as to impose quotas, so that you can say "no client can download more than X per day". This is more useful if you're delivering relatively large files (like the 200-900k RSS feeds I'm serving) and you don't want people downloading them once per hour.

It also doesn't require an external script. Cleanup of the internal logs is handled during the cleanup phase. It uses DB_File to store the data. I should probably add locking, or maybe just an option to use BerkeleyDB.pm if it's available, and use that module's built-in locking.

-dave

/*=======================
 House Absolute Consulting
 www.houseabsolute.com
=======================*/
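P.S. The heart of the quota check is roughly this - a simplified sketch rather than the real module, with made-up names, paths, and numbers, and (as noted above) no locking yet:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DB_File;
    use Fcntl qw(O_CREAT O_RDWR);

    # Tie a hash of "client:day => bytes served" to a DB_File database.
    # (No locking -- illustration only.)
    my %quota;
    tie %quota, 'DB_File', '/tmp/quota.db', O_CREAT | O_RDWR, 0644, $DB_HASH
        or die "Cannot tie quota db: $!";

    my $max_per_day = 10 * 1024 * 1024;    # say, 10MB per client per day

    # Keying on client plus day means stale entries can simply be
    # deleted later, e.g. during the cleanup phase.
    sub over_quota {
        my ( $client, $bytes ) = @_;
        my @t      = localtime;
        my $key    = join ':', $client, $t[7], $t[5];    # client:yday:year
        my $so_far = $quota{$key} || 0;
        return 1 if $so_far + $bytes > $max_per_day;
        $quota{$key} = $so_far + $bytes;
        return 0;
    }

    # In a mod_perl handler this would gate the response, roughly:
    #   return FORBIDDEN
    #       if over_quota( $r->connection->remote_ip, -s $r->filename );
    print over_quota( '10.0.0.1', 500_000 ) ? "over quota\n" : "ok\n";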