[ https://issues.apache.org/jira/browse/NUTCH-2011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14547676#comment-14547676 ]

Sebastian Nagel commented on NUTCH-2011:
----------------------------------------

Yes, that's because of the nodeDB feature (the crawl had run before without any 
problems). Of course, a parsing fetcher needs more heap memory, especially if 
there are huge files to parse. I traced the memory usage of the running fetch 
process: it started with 350 MB and reached 1 GB after 35,000 documents 
fetched. After that the process slowed down; I had to kill it after 39,000 
fetched docs when the fetch speed had dropped to 0.1 docs/sec. while the 
process spent more and more CPU on garbage collection.
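For reference, such a trace can also be reproduced from inside the process; the
following is only a minimal sketch (this helper is not part of Nutch, and the
class and method names are made up for illustration):

{code:java}
// Hypothetical helper, not part of Nutch: logs the used heap so the growth
// caused by accumulated per-document state can be observed over time.
public class HeapLogger {
  public static void logUsedHeap(String label) {
    Runtime rt = Runtime.getRuntime();
    long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
    System.out.println(label + ": used heap = " + usedMb + " MB");
  }
}
{code}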
Roughly 16 kB are required for one FetchNode entry. That seems like a lot at 
first glance, but in fact URL, title, outlinks and anchors do take some space 
(including the overhead of the Java objects). Also, a HashMap<Integer, FetchNode> 
is not the ideal structure to hold a list of consecutively numbered elements.
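As a rough sketch of what I mean (the FetchNode stand-in and field names below
are simplifications for illustration, not the actual Nutch class), consecutive
ids can be mapped directly to list indices, which avoids the boxed Integer keys
and HashMap.Entry objects entirely:

{code:java}
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for a FetchNode entry; the real class lives in the
// fetcher package and holds more state.
class FetchNodeSketch {
  String url;
  String title;
  String[] outlinks;
}

class FetchNodeStore {
  // Consecutive ids map directly to list indices, so no Integer boxing
  // and no per-entry HashMap.Entry objects are needed.
  private final List<FetchNodeSketch> nodes = new ArrayList<>();

  synchronized int add(FetchNodeSketch node) {
    nodes.add(node);
    return nodes.size();        // ids are 1-based and consecutive
  }

  synchronized FetchNodeSketch get(int id) {
    return nodes.get(id - 1);
  }
}
{code}

A bounded variant (keeping only the last N entries) would additionally cap the
memory footprint for long-running fetches.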

> Endpoint to support realtime JSON output from the fetcher
> ---------------------------------------------------------
>
>                 Key: NUTCH-2011
>                 URL: https://issues.apache.org/jira/browse/NUTCH-2011
>             Project: Nutch
>          Issue Type: Sub-task
>          Components: fetcher, REST_api
>            Reporter: Sujen Shah
>            Assignee: Chris A. Mattmann
>              Labels: memex
>             Fix For: 1.11
>
>
> This fix will create an endpoint to query the Nutch REST service and get a 
> real-time JSON response of the currently/previously fetched URLs. 
> The endpoint also includes pagination of the output to reduce data transfer 
> bandwidth in large crawls.
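
For illustration only (this is not the actual NUTCH-2011 patch; the resource
path, parameter names and the in-memory list are assumptions), a paginated JSON
endpoint in the JAX-RS style used by the Nutch REST service could look roughly
like this:

{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import javax.ws.rs.DefaultValue;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.MediaType;

@Path("/fetch")
public class FetchedUrlResource {

  // Stand-in for the fetcher's node store; a real implementation would
  // read the FetchNode entries collected by the running fetch.
  private static final List<String> FETCHED_URLS =
      Collections.synchronizedList(new ArrayList<String>());

  @GET
  @Produces(MediaType.APPLICATION_JSON)
  public List<String> getFetchedUrls(
      @QueryParam("start") @DefaultValue("0") int start,
      @QueryParam("count") @DefaultValue("100") int count) {
    // Return one page of fetched URLs so large crawls do not send the
    // whole list in a single JSON response.
    synchronized (FETCHED_URLS) {
      int from = Math.min(start, FETCHED_URLS.size());
      int to = Math.min(from + count, FETCHED_URLS.size());
      return new ArrayList<String>(FETCHED_URLS.subList(from, to));
    }
  }
}
{code}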



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
