I've never really had to take robots.txt into
consideration before, as any site crawling
I've done has been on secluded sites whose
structure I already knew.
From what I understand, Google uses a system
of robots: the first robot checks for
robots.txt, makes decisions based on what it
finds, and then hands the real work off to
other robots. This strikes me as a reasonable
way to proceed. Does anyone have any
counter-recommendations?
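
Roughly what I have in mind, as a minimal
Python sketch (the MyCrawler name and the
example.com URLs are placeholders, and the
parsing leans on the standard library's
urllib.robotparser):

    from urllib import robotparser
    from urllib.parse import urljoin

    USER_AGENT = "MyCrawler"  # placeholder bot name

    def allowed_urls(base, candidates, agent=USER_AGENT):
        # Gatekeeper robot: fetch robots.txt once, then keep only
        # the candidate URLs the site says we may crawl.
        rp = robotparser.RobotFileParser()
        rp.set_url(urljoin(base, "/robots.txt"))
        rp.read()  # a missing robots.txt is treated as "allow all"
        return [u for u in candidates if rp.can_fetch(agent, u)]

    def crawl(url):
        # Worker robot: stand-in for whatever does the real work.
        print("would fetch:", url)

    if __name__ == "__main__":
        base = "https://example.com/"
        pages = [urljoin(base, p)
                 for p in ("index.html", "private/report.html")]
        for url in allowed_urls(base, pages):
            crawl(url)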

Also, does anyone already have a
robots.txt-to-object script out there?
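
(Python's standard library does ship
urllib.robotparser, called just robotparser
in Python 2, which already parses robots.txt
into a queryable object, as in the sketch
above; I'd still be interested in anything
more battle-tested.)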

