So right now Google is allowed to spider bugs.debian.org, but other search engines are not. That sounds discriminatory.

Perhaps the web server logs could be checked to see how much load Googlebot actually generates?

If the numbers are not very significant, other spiders could be allowed, couldn't they?
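
As a rough illustration, something along these lines could tally crawler requests from an Apache combined-format access log. The log path and the list of bot names here are just assumptions, not the actual bugs.debian.org setup:

#!/usr/bin/env python
# Sketch: count requests per crawler in an Apache combined-format
# access log. Log path and bot names are hypothetical.

import re
from collections import Counter

LOG_FILE = "/var/log/apache2/access.log"   # assumed path
BOTS = ("Googlebot", "msnbot", "Yahoo! Slurp", "YandexBot")

counts = Counter()
total = 0

with open(LOG_FILE, errors="replace") as log:
    for line in log:
        total += 1
        # The user agent is the last quoted field in the combined format.
        match = re.search(r'"([^"]*)"\s*$', line)
        agent = match.group(1) if match else ""
        for bot in BOTS:
            if bot in agent:
                counts[bot] += 1
                break

for bot in BOTS:
    share = 100.0 * counts[bot] / total if total else 0.0
    print("%-14s %8d requests (%.1f%% of %d)" % (bot, counts[bot], share, total))

Run over a day or a week of logs, that would give a rough idea of what share of the traffic each crawler is responsible for.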


Tomasz Chmielewski
http://blog.wpkg.org


