Hi,
What would be the best way to crawl with two different user-agents, so
as to compare the pages a server returns for each agent and then
accept/reject the URL (for subsequent parsing/indexing, etc.)?
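Roughly what I have in mind, as a minimal standalone sketch (plain
java.net.HttpURLConnection rather than the Nutch fetcher API; the class
name and user-agent strings are just placeholders):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Sketch: fetch the same URL twice with different User-Agent
    // headers and accept it only if both responses match.
    public class DualAgentFetch {

        static String fetch(String url, String userAgent) throws Exception {
            HttpURLConnection conn =
                (HttpURLConnection) new URL(url).openConnection();
            conn.setRequestProperty("User-Agent", userAgent);
            StringBuilder body = new StringBuilder();
            BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()));
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line).append('\n');
            }
            in.close();
            return body.toString();
        }

        public static void main(String[] args) throws Exception {
            String url = args[0];
            // One fetch identifying as a crawler, one as a browser.
            String crawlerView = fetch(url, "MyCrawler/1.0");
            String browserView = fetch(url, "Mozilla/5.0 (X11; Linux)");
            boolean accept = crawlerView.equals(browserView);
            System.out.println(url
                + (accept ? " accepted" : " rejected (cloaking?)"));
        }
    }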
I believe the Google crawler used to do (and may still do) something
similar to catch spam sites that present one page to the crawler and a
completely different one to a user accessing the URL with a
conventional browser.
So, I guess my question is: is this best done with a new HTTP fetcher
plugin, and if so, how might it be implemented so as not to double the
number of parsing/indexing operations?
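One idea for avoiding the doubling: reduce the second fetch to a cheap
content comparison, so only one copy ever reaches the parse/index
stages. A hypothetical helper (the class and method names are made up):

    import java.security.MessageDigest;

    // Hypothetical helper: compare the two fetched copies by digest,
    // so the browser-agent copy is discarded immediately and only the
    // crawler-agent copy is handed on to parsing/indexing.
    public class ContentCompare {
        public static boolean sameContent(byte[] crawlerCopy,
                                          byte[] browserCopy)
                throws Exception {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            byte[] a = md5.digest(crawlerCopy);
            byte[] b = md5.digest(browserCopy);
            return MessageDigest.isEqual(a, b);
        }
    }

An exact byte match is probably too strict for dynamic pages
(timestamps, rotating ads, etc.), so in practice some similarity
threshold would presumably be needed rather than a plain equality test.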
-Ed