Hi,

I'm running Nutch with a Hadoop nightly build, and everything works fine 
except the dedup job. I'm getting "Lock obtain timed out" every time in 
DeleteDuplicates.reduce(), after the call to reader.deleteDocument(value.get()).
I have 4 servers running the job in parallel through Hadoop, so it seems 
clear they can run into this kind of trouble with each other.
What can I do to avoid this problem?
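
For reference, each reduce task effectively ends up doing something like 
this against the segment's Lucene index (a minimal sketch; the class name, 
index path, and document number are made up for illustration):

    import org.apache.lucene.index.IndexReader;

    public class DedupDeleteSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical index path and doc number, just to show the pattern.
            IndexReader reader = IndexReader.open("/data/nutch/index");
            reader.deleteDocument(0); // the first delete acquires Lucene's write.lock
            reader.close();           // the lock is released only when the reader closes
        }
    }

With four tasks doing this against the same index at once, whichever one 
asks for write.lock second just waits and eventually fails with the 
timeout.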

Thanks,

des
