On Mon, 2011-01-17 at 11:32 -0500, Bryan Price wrote:

> 
> /robots.txt
> 
> User-agent: *
> Disallow: *

Interesting info based on the above, but I wonder: where is the second
version of the standard? :)

"The first version of the Robot Exclusion standard does not mention
anything about the "*" character in the Disallow: statement. Some
crawlers like Googlebot and Slurp recognize strings containing "*",
while MSNbot and Teoma interpret it in different ways."

The quote appears just below this section:
http://en.wikipedia.org/wiki/Robots_exclusion_standard#Sitemap

which cites the following as its source:
http://ghita.org/search-engines-dynamic-content-issues.html
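As a side note of my own (not from that page): you can see the "strict" reading of the original standard in Python's stdlib robotparser, which doesn't implement wildcards at all. It treats "Disallow: *" as a literal path prefix, so that rule ends up blocking nothing, while the standard "Disallow: /" blocks everything. A quick sketch, as observed with CPython 3:

```python
from urllib.robotparser import RobotFileParser

# "Disallow: *" -- the wildcard is not in the original standard, so a
# strict parser treats "*" as a literal path prefix that matches no URL.
wildcard = RobotFileParser()
wildcard.parse(["User-agent: *", "Disallow: *"])

# "Disallow: /" -- the standard-conformant way to block everything.
slash = RobotFileParser()
slash.parse(["User-agent: *", "Disallow: /"])

print(wildcard.can_fetch("SomeBot", "http://example.com/page.html"))  # True: nothing blocked
print(slash.can_fetch("SomeBot", "http://example.com/page.html"))     # False: all blocked
```

Which is presumably why Googlebot/Slurp-style wildcard handling had to be bolted on by each crawler individually.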

-- 
William L. Thomson Jr.
Systems Administrator
Jacksonville Linux Users Group


---------------------------------------------------------------------
Archive      http://marc.info/?l=jaxlug-list&r=1&w=2
RSS Feed     http://www.mail-archive.com/[email protected]/maillist.xml
Unsubscribe  [email protected]
