[ 
https://issues.apache.org/jira/browse/NUTCH-1067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julien Nioche reopened NUTCH-1067:
----------------------------------


At revision 1170548.

ant clean then ant =>

compile-core:
    [javac] /data/nutch-1.4/build.xml:96: warning: 'includeantruntime' was not 
set, defaulting to build.sysclasspath=last; set to false for repeatable builds
    [javac] Compiling 172 source files to /data/nutch-1.4/build/classes
    [javac] /data/nutch-1.4/src/java/org/apache/nutch/crawl/Crawl.java:136: 
fetch(org.apache.hadoop.fs.Path,int) in org.apache.nutch.fetcher.Fetcher cannot 
be applied to (org.apache.hadoop.fs.Path,int,boolean)
    [javac]       fetcher.fetch(segs[0], threads, 
org.apache.nutch.fetcher.Fetcher.isParsing(getConf()));  // fetch it
    [javac]              ^
    [javac] /data/nutch-1.4/src/java/org/apache/nutch/tools/Benchmark.java:234: 
fetch(org.apache.hadoop.fs.Path,int) in org.apache.nutch.fetcher.Fetcher cannot 
be applied to (org.apache.hadoop.fs.Path,int,boolean)
    [javac]       fetcher.fetch(segs[0], threads, 
org.apache.nutch.fetcher.Fetcher.isParsing(getConf()));  // fetch it
    [javac]              ^
    [javac] Note: Some input files use or override a deprecated API.
    [javac] Note: Recompile with -Xlint:deprecation for details.
    [javac] Note: Some input files use unchecked or unsafe operations.
    [javac] Note: Recompile with -Xlint:unchecked for details.
    [javac] 2 errors

BUILD FAILED


> Configure minimum throughput for fetcher
> ----------------------------------------
>
>                 Key: NUTCH-1067
>                 URL: https://issues.apache.org/jira/browse/NUTCH-1067
>             Project: Nutch
>          Issue Type: New Feature
>          Components: fetcher
>            Reporter: Markus Jelsma
>            Assignee: Markus Jelsma
>            Priority: Minor
>             Fix For: 1.4
>
>         Attachments: NUTCH-1067-1.4-1.patch, NUTCH-1067-1.4-2.patch, 
> NUTCH-1067-1.4-3.patch, NUTCH-1067-1.4-4.patch
>
>
> Large fetches can contain many URLs for the same domain. These can be 
> very slow to crawl due to politeness delays from robots.txt, e.g. 10s per 
> URL. Once all other URLs have been fetched, these queues can stall the 
> entire fetcher; 60 URLs can then take 10 minutes or even more. This can 
> usually be dealt with using the time bomb, but the time-bomb value is hard 
> to determine.
> This patch adds a fetcher.throughput.threshold setting: the minimum number 
> of pages per second below which the fetcher gives up. It does not use the 
> global number of pages divided by total running time, but records the 
> actual number of pages processed in the previous second and compares that 
> value with the configured threshold.
> Besides this check, the fetcher's status is also updated with the actual 
> number of pages per second and bytes per second.
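
The per-second check described above can be sketched roughly as follows. This is an illustrative sketch only, not the actual NUTCH-1067 patch code; the class and method names (ThroughputMonitor, recordPage, tick) are hypothetical:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of a per-second throughput check: fetcher threads
// count pages as they finish, and a monitor thread samples the counter
// once per second, comparing it against a configured minimum threshold.
public class ThroughputMonitor {
    private final int threshold;  // minimum pages/second, <= 0 disables the check
    private final AtomicInteger pagesThisSecond = new AtomicInteger(0);
    private int lastSecondPages = -1;

    public ThroughputMonitor(int threshold) {
        this.threshold = threshold;
    }

    // Called by fetcher threads after each page is processed.
    public void recordPage() {
        pagesThisSecond.incrementAndGet();
    }

    // Called once per second by the monitoring loop. Resets the counter and
    // returns true if the fetcher should give up because throughput in the
    // previous second fell below the threshold.
    public boolean tick() {
        lastSecondPages = pagesThisSecond.getAndSet(0);
        return threshold > 0 && lastSecondPages < threshold;
    }

    // Last sampled pages-per-second value, for status reporting.
    public int lastPagesPerSecond() {
        return lastSecondPages;
    }
}
```

Note that sampling the previous second's count, rather than dividing total pages by total running time, is what lets the check react quickly when only a few slow per-host queues remain.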

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
