On Mon, 2011-01-17 at 11:32 -0500, Bryan Price wrote:
>
> 
> /robots.txt
> 
> User-agent: *
> Disallow: *
> 
> Probably as easy as blocking IPs.  And then, if and when you feel you
> have the time, you can refine it.

Already in place, but it's not quite as easy as blocking IPs. If I blocked
IPs via iptables, all traffic would stop immediately; there are of course
other ways to block as well. The robots.txt file with the above in it has
been in place for a few minutes now, and Google is continuing to make new
requests and crawl across the wiki ;)
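
For reference, this is the sort of firewall rule I mean (the CIDR range
is just an example of a published Googlebot block, not necessarily the
one hitting us). It drops everything from that range, not just crawler
requests, which is why it takes effect immediately:

  # drop all traffic from the example crawler range
  iptables -A INPUT -s 66.249.64.0/19 -j DROP
  # and the matching -D to remove the rule later
  iptables -D INPUT -s 66.249.64.0/19 -j DROP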

Hopefully the robots.txt file will kick in. I'm not sure if it is re-read
with every request from a search engine crawler, or only fetched at the
start of a crawl. Then again, there are no guarantees the robots.txt route
will work, per this link provided in a previous posting:

http://www.mediawiki.org/wiki/Manual:Robots.txt#Problems
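
One thing I may double-check as well: the original robots exclusion
standard blocks everything with a plain slash rather than a wildcard, so
I might switch the file to the stricter form:

  User-agent: *
  Disallow: /

And if our MediaWiki is new enough to support it (1.12+, if I recall),
there's also the option of setting the default robot policy in
LocalSettings.php, so every page carries a noindex meta tag regardless
of what the crawler does with robots.txt:

  $wgDefaultRobotPolicy = 'noindex,nofollow';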

-- 
William L. Thomson Jr.
Systems Administrator
Jacksonville Linux Users Group


