On Mon, Jan 17, 2011 at 7:11 AM, William L. Thomson Jr. <
[email protected]> wrote:

> On Mon, 2011-01-17 at 09:05 -0500, Chad Bailey wrote:
> > Google's crawler obeys /robots.txt; that's where I'd start.
>
> Yes, and I was looking into that, but it's not so easy or straightforward
> when dealing with a wiki. For example:
>
> http://en.wikipedia.org/wiki/MediaWiki:Robots.txt
> http://meta.wikimedia.org/wiki/MediaWiki:Robots.txt
> http://commons.wikimedia.org/wiki/MediaWiki:Robots.txt
>
> Really not sure where to even begin. Oh, and Google is still hitting the
> wiki. It's been going on for days now...
>

/robots.txt

User-agent: *
Disallow: /

Probably as easy as blocking IPs.  And then, if and when you feel you have
the time, you can refine it.
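One quick way to sanity-check a rule like that before deploying it is Python's stdlib urllib.robotparser (just a sketch -- the hostname and paths below are placeholders, and any robots.txt checker would do):

```python
from urllib.robotparser import RobotFileParser

# Parse the blanket-deny rules directly, without fetching anything.
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /",
])

# With "Disallow: /" every path is off-limits to every crawler,
# including Googlebot. (example.org is a placeholder host.)
print(rp.can_fetch("Googlebot", "http://example.org/wiki/Main_Page"))
print(rp.can_fetch("*", "http://example.org/index.php?title=Special:Search"))
```

Both calls come back False, which is what you want for the "block everything first, refine later" approach. Note that bots only re-fetch /robots.txt periodically, so Google may keep crawling for a while after the file goes live.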
-- 
I don't wanna change the world
I just wanna leave it colder -- Breaking Benjamin
