Two weeks ago I disabled the Google crawler completely by adding
'Disallow: /' to my robots.txt file. This has resulted in a huge
decrease in the volume of traffic, as shown by the attached graph.
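For reference, blocking a crawler from the whole site takes a
stanza like this (standard robots.txt syntax; whether it targets
Googlebot alone or all user agents, the effect on Google is the
same):

  User-agent: Googlebot
  Disallow: /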

Previously I had my robots.txt configured to disallow everything
else, including /browse, since I do have sitemaps in use.
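Roughly like this, abridged (there were more Disallow lines, and
the Sitemap URL shown is only illustrative, pointing at the usual
DSpace sitemap location):

  User-agent: *
  Disallow: /browse
  Sitemap: http://esal.dut.ac.za/sitemap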

What does Googlebot do once it's received the sitemap? Does it then
download everything listed in it?

Have I got something badly misconfigured here, or do I just accept
that Googlebot is our site's most prolific visitor?

Sean
-- 
Sean Carte
esAL Library Systems Manager
+27 72 898 8775
+27 31 373 2490
fax: 0866741254
http://esal.dut.ac.za/

<<attachment: 20110909_ir_eth0.png>>
