For "kind" agents, a robots.txt suffices.
For inconsiderate harvesters, that kind of work is usually handled by
either the firewall or the web server.

In web2py you can either inspect the user agent on every request and
reject it with an HTTP(500) in a model, or implement your own rate
limiting, which needs to be as fast as possible.
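A minimal sketch of both ideas, as plain Python so it's testable on its own. The blocklist, window, and limit values are made up for illustration; in a real web2py model you'd call this with request.env.http_user_agent and request.client, and `raise HTTP(500)` when it returns True:

```python
import time
from collections import defaultdict, deque

# Hypothetical values -- tune for your site.
BLOCKED_AGENTS = ("wget", "curl", "scrapy")
WINDOW = 10.0    # seconds
MAX_HITS = 20    # requests allowed per IP per window

_hits = defaultdict(deque)  # ip -> timestamps of recent requests

def is_blocked(user_agent, ip, now=None):
    """Return True if this request should be rejected.

    In a web2py model:
        if is_blocked(request.env.http_user_agent, request.client):
            raise HTTP(500)
    """
    now = time.time() if now is None else now
    ua = (user_agent or "").lower()
    # Cheap user-agent check first.
    if any(bot in ua for bot in BLOCKED_AGENTS):
        return True
    # Sliding-window rate limit: keep only timestamps inside WINDOW.
    q = _hits[ip]
    q.append(now)
    while q and now - q[0] > WINDOW:
        q.popleft()
    return len(q) > MAX_HITS
```

Keeping the state in a module-level dict (or web2py's cache.ram) keeps the check fast, but note it is per-process, so it won't be shared across multiple workers.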

On Thursday, May 9, 2013 8:58:43 PM UTC+2, Alex Glaros wrote:
>
> What techniques can be used in a Web2py site to prevent data mining by 
> harvester bots?
>
> In my day job, if the Oracle database slows down, I go to the Unix OS, see 
> if the same IP address is doing a-lot-faster-than-a-human-could-type 
> queries, and then block that IP address in the firewall.
>
> Are there any ideas that that I could use with a Web2py website?
>
> Thanks,
>
> Alex Glaros
>
>

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"web2py-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to web2py+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.
