On Wed, 2006-07-05 at 20:32 -0700, Stefan Groschupf wrote:
> Crawler & Co. are command line tools.
> The servlet container is only used to deliver search results, but you
> can use the servlet that just provides XML.

Ah, excellent. Thanks for letting me avoid reading the manual ;)

> > It would be nice to automatically detect the content "frame" by
> > analyzing the DOM tree of the pages on a site. Is there such a
> > feature in Nutch, contributed to, or publicly available in some other
> > project?
> 
> I'm not sure I clearly understand your question here.
> Nutch has an html parser plugin that only extracts the content from an
> html page.

Do you mean all the text in an HTML document, or do you mean the content
area of an HTML document? This is what I mean: newspaper X has a static
design with navigation, some ads in text form, etc. In the middle of the
document is the article. I want to detect the article area and index
only that information, as everything else is irrelevant and more or less
recurs with the same content in every document. I presume it would not
be too tough to do based on an HTML DOM.
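
Something along these lines is what I had in mind. This is a rough,
untested sketch, not anything that exists in Nutch as far as I know, and
it assumes the page has already been parsed into a W3C DOM, e.g. with an
HTML parser such as NekoHTML:

import org.w3c.dom.*;
import java.util.*;

/*
 * Sketch only: walk the DOM and pick the DIV with the highest "text
 * density", i.e. a lot of text but few links underneath it.
 */
public class ContentAreaGuesser {

  /** Returns the element that looks most like the main article area. */
  public static Element guessContentArea(Document doc) {
    Element best = doc.getDocumentElement();
    double bestScore = score(best);
    List divs = new ArrayList();
    collect(doc.getDocumentElement(), "div", divs);
    for (Iterator it = divs.iterator(); it.hasNext();) {
      Element e = (Element) it.next();
      double s = score(e);
      if (s > bestScore) { bestScore = s; best = e; }
    }
    return best;
  }

  /** Text length penalized by the number of links below the element. */
  private static double score(Element e) {
    List links = new ArrayList();
    collect(e, "a", links);
    return textLength(e) / (1.0 + 10.0 * links.size());
  }

  /** Collects descendant elements with the given tag name
      (case-insensitive, since HTML parsers differ in how they
      case element names). */
  private static void collect(Node n, String tag, List out) {
    if (n.getNodeType() == Node.ELEMENT_NODE
        && n.getNodeName().equalsIgnoreCase(tag)) {
      out.add(n);
    }
    NodeList children = n.getChildNodes();
    for (int i = 0; i < children.getLength(); i++) {
      collect(children.item(i), tag, out);
    }
  }

  /** Total length of all text nodes under the node. */
  private static int textLength(Node n) {
    if (n.getNodeType() == Node.TEXT_NODE) {
      return n.getNodeValue().trim().length();
    }
    int len = 0;
    NodeList children = n.getChildNodes();
    for (int i = 0; i < children.getLength(); i++) {
      len += textLength(children.item(i));
    }
    return len;
  }
}

The link penalty is only there to keep navigation blocks from winning on
sheer text length; the weights would no doubt need tuning per site.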

