[ https://issues.apache.org/jira/browse/NUTCH-1067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Markus Jelsma updated NUTCH-1067:
---------------------------------

    Attachment: NUTCH-1067-1.4-2.patch

New patch that enables the check only after the feeder has finished and allows a configurable number of times the threshold may be exceeded. The return statement used can produce a significant number of exceptions; it is probably clearer to clear the queues first.

> Configure minimum throughput for fetcher
> ----------------------------------------
>
>                 Key: NUTCH-1067
>                 URL: https://issues.apache.org/jira/browse/NUTCH-1067
>             Project: Nutch
>          Issue Type: New Feature
>          Components: fetcher
>            Reporter: Markus Jelsma
>            Assignee: Markus Jelsma
>            Priority: Minor
>             Fix For: 1.4, 2.0
>
>         Attachments: NUTCH-1067-1.4-1.patch, NUTCH-1067-1.4-2.patch
>
>
> Large fetches can contain many URLs for the same domain. These can be very
> slow to crawl due to politeness settings from robots.txt, e.g. 10s per URL.
> Once all other URLs have been fetched, these queues can stall the entire
> fetcher; 60 URLs can then take 10 minutes or more. This can usually be
> dealt with using the time bomb, but the time bomb value is hard to
> determine. This patch adds a fetcher.throughput.threshold setting: the
> minimum number of pages per second before the fetcher gives up. It doesn't
> use the global number of pages / running time, but records the actual
> pages processed in the previous second and compares that value with the
> configured threshold. Besides the check, the fetcher's status is also
> updated with the actual number of pages per second and bytes per second.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
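The per-second threshold check described in the patch could be sketched roughly as follows. This is an illustrative reconstruction, not the actual NUTCH-1067 patch: the class and method names (`ThroughputChecker`, `shouldAbort`) are hypothetical, and only `fetcher.throughput.threshold` is a setting named in the issue.

```java
/**
 * Hypothetical sketch of the minimum-throughput check from NUTCH-1067.
 * The fetcher would call shouldAbort() once per second; the check is
 * skipped while the feeder is still queueing URLs, and the fetcher only
 * gives up after the threshold has been missed more than the configured
 * number of times.
 */
public class ThroughputChecker {

    private final int threshold;      // e.g. fetcher.throughput.threshold (pages/sec)
    private final int maxViolations;  // hypothetical: allowed misses before aborting
    private int violations = 0;

    public ThroughputChecker(int threshold, int maxViolations) {
        this.threshold = threshold;
        this.maxViolations = maxViolations;
    }

    /**
     * @param feederAlive     whether the feeder is still adding URLs
     * @param pagesLastSecond pages actually fetched in the previous second
     *                        (not the global pages / running-time average)
     * @return true if the fetcher should give up and clear its queues
     */
    public boolean shouldAbort(boolean feederAlive, int pagesLastSecond) {
        if (feederAlive) {
            return false; // only enforce the check once feeding has finished
        }
        if (pagesLastSecond < threshold) {
            violations++;
        }
        return violations > maxViolations;
    }
}
```

With a threshold of 5 pages/sec and 2 allowed misses, the checker tolerates two slow seconds and signals an abort on the third, which matches the "configurable number of times to exceed the threshold" behavior the comment describes.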