Brian Tingle wrote:
Thanks, I eventually found the job tracker on the :50030 web page of the
Cloudera distribution, and I saw it said "10 threads" for each crawler in
the little status box that reports how far along each crawl is. I have to
say, this whole thing (Nutch/Hadoop) is pretty flipping awesome. Great work.
I'm running on AWS EC2 us-east and spidering sites that should be hosted
on the CENIC network in California. Do you have any suggestions for a good
number of threads per crawler in that situation (I'm guessing it might be
hard to saturate the bandwidth)? I'm thinking I'll bump it up to at least 25.
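For context, in Nutch 1.x the per-task fetcher thread count is controlled
by the fetcher.threads.fetch property, which can be overridden in
conf/nutch-site.xml. The snippet below is a minimal sketch assuming that
version; check your nutch-default.xml for the exact name and default:

  <!-- conf/nutch-site.xml: override the per-task fetcher thread count -->
  <property>
    <name>fetcher.threads.fetch</name>
    <value>25</value>
  </property>

In that era the crawl command also accepted the count directly, e.g.
bin/nutch crawl urls -dir crawl -depth 3 -threads 25.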
You need to be careful when running large crawls on someone else's
infrastructure. While the raw bandwidth may be sufficient, the DNS
infrastructure may not be, both on the side of the target domains and on
your local resolver. I strongly recommend setting up a local caching DNS
server.
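One common way to do this (a sketch, not something specified in this
thread) is to run a small caching resolver such as dnsmasq on every crawl
node and point the node at it:

  # /etc/dnsmasq.conf -- minimal caching-only resolver for a crawl node
  listen-address=127.0.0.1   # serve only the local machine
  cache-size=10000           # cache up to 10,000 records
  server=8.8.8.8             # upstream resolver; use one appropriate for your network

  # /etc/resolv.conf -- make the node query the local cache first
  nameserver 127.0.0.1

This keeps repeated lookups for the same hosts off the upstream resolver,
which matters because a fetcher resolves the same domains over and over.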
--
Best regards,
Andrzej Bialecki <><
Information Retrieval, Semantic Web
Embedded Unix, System Integration
http://www.sigram.com Contact: info at sigram dot com