A good place to begin would be to look at the log file generated by Hadoop, 
which is in the "logs/" directory (hadoop.log by default). The file rolls 
over each day, but that shouldn't be a problem.
 
You could parse that file after each crawl, pulling out just the links and 
discarding all the other information.
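
A minimal sketch of that parsing step, assuming the external links do show 
up in logs/hadoop.log as plain http(s) URLs; the regex, file names, and the 
allowed-host list below are illustrative placeholders, not anything Nutch 
itself provides:

    import java.io.IOException;
    import java.net.URI;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.Set;
    import java.util.TreeSet;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    /** Rough sketch: pull URLs out of hadoop.log and keep only external ones. */
    public class ExternalLinkReport {

        // Naive URL matcher; adjust it to whatever your log lines actually contain.
        private static final Pattern URL = Pattern.compile("https?://[^\\s\"']+");

        public static void main(String[] args) throws IOException {
            // Hosts you intend to crawl; anything else counts as external.
            Set<String> allowed = Set.of("www.example.com", "example.org");

            Set<String> external = new TreeSet<>();
            for (String line : Files.readAllLines(Path.of("logs/hadoop.log"))) {
                Matcher m = URL.matcher(line);
                while (m.find()) {
                    String url = m.group();
                    try {
                        String host = URI.create(url).getHost();
                        if (host != null && !allowed.contains(host)) {
                            external.add(url); // collect for human validation
                        }
                    } catch (IllegalArgumentException badUrl) {
                        // Skip anything that only looks like a URL.
                    }
                }
            }
            // One URL per line; after review, this file can be fed back to Nutch.
            Files.write(Path.of("external-links.txt"), external);
        }
    }

Once the resulting file has been reviewed by hand, the approved URLs can be 
placed in a seed directory and re-injected with bin/nutch inject <crawldb> 
<url_dir>.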
 
----- Original Message ----
From: djames <[EMAIL PROTECTED]>
To: [email protected]
Sent: Thursday, March 8, 2007 8:10:11 AM
Subject: external host link logging


Hello,

I've been working with Nutch for two months now, and I'm very happy to see
that this project is so powerful!

I need to crawl only a given set of websites, so I set the parameter
db.ignore.external.links to true (a minimal override is sketched after this
paragraph), and it works perfectly.
But now I need to create a log file listing all parsed or fetched links that
lead to external hosts, for human validation and reinjection into the crawl
db. I don't know how to begin.
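
For reference, a minimal nutch-site.xml override for the property mentioned 
above might look like the following; the description text is paraphrased 
rather than copied from nutch-default.xml:

    <?xml version="1.0"?>
    <configuration>
      <property>
        <name>db.ignore.external.links</name>
        <!-- When true, outlinks pointing to hosts outside the seed sites are
             dropped instead of being added to the crawl db. -->
        <value>true</value>
        <description>Ignore outlinks that lead to external hosts.</description>
      </property>
    </configuration>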

Could someone help me, please?

Thanks a lot