[ https://issues.apache.org/jira/browse/NUTCH-2144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15146201#comment-15146201 ]

ASF GitHub Bot commented on NUTCH-2144:
---------------------------------------

Github user sebastian-nagel commented on a diff in the pull request:

    https://github.com/apache/nutch/pull/89#discussion_r52833653
  
    --- Diff: src/java/org/apache/nutch/parse/ParseOutputFormat.java ---
    @@ -338,6 +340,9 @@ public static String filterNormalize(String fromUrl, String toUrl,
           } catch (MalformedURLException e1) {
             return null; // skip it
           }
    +
    +
    +
           if ("bydomain".equalsIgnoreCase(ignoreExternalLinksMode)) {
             String toDomain = URLUtil.getDomainName(targetURL).toLowerCase();
             if (toDomain == null || !toDomain.equals(origin)) {
    --- End diff --
    
    Shouldn't this case also be covered (db.ignore.external.links == true and 
db.ignore.external.links.mode == byDomain)?
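For context, a minimal standalone sketch of the by-domain check being discussed (an illustration only, not the actual patch; Nutch's `URLUtil.getDomainName` resolves domains against a public-suffix list, which is approximated here by stripping a leading "www."):

```java
import java.net.MalformedURLException;
import java.net.URL;

public class ByDomainCheck {

    /** Naive stand-in for URLUtil.getDomainName (assumption: the real
     *  implementation consults a public-suffix list). */
    static String getDomainName(URL url) {
        String host = url.getHost().toLowerCase();
        return host.startsWith("www.") ? host.substring(4) : host;
    }

    /** Returns null (skip the outlink) when the target leaves the
     *  origin's domain, mirroring the quoted "bydomain" branch. */
    static String filterByDomain(String origin, String toUrl) {
        URL target;
        try {
            target = new URL(toUrl);
        } catch (MalformedURLException e) {
            return null; // skip malformed outlinks, as in the quoted diff
        }
        String toDomain = getDomainName(target);
        if (toDomain == null || !toDomain.equals(origin)) {
            return null; // external by domain -> ignored
        }
        return toUrl;
    }

    public static void main(String[] args) {
        // Same registered domain: kept.
        System.out.println(filterByDomain("example.com", "http://www.example.com/a"));
        // Different domain: dropped (prints "null").
        System.out.println(filterByDomain("example.com", "http://cdn.other.com/img.png"));
    }
}
```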


> Plugin to override db.ignore.external to exempt interesting external domain 
> URLs
> --------------------------------------------------------------------------------
>
>                 Key: NUTCH-2144
>                 URL: https://issues.apache.org/jira/browse/NUTCH-2144
>             Project: Nutch
>          Issue Type: New Feature
>          Components: crawldb, fetcher
>            Reporter: Thamme Gowda N
>            Assignee: Chris A. Mattmann
>            Priority: Minor
>             Fix For: 1.12
>
>         Attachments: ignore-exempt.patch, ignore-exempt.patch
>
>
> Create a rule-based urlfilter plugin that allows a focused crawler 
> (db.ignore.external.links=true) to fetch static resources from external 
> domains.
> The generalized version of this: the plugin should permit interesting URLs 
> from external domains (overriding db.ignore.external). The interesting 
> URLs are decided by a combination of regex and MIME-type rules.
> Concrete use case:
>   When using Nutch to crawl images from a set of domains, the crawler needs 
> to fetch all images, which may be linked from CDNs and other domains. In 
> this scenario, allowing all external links and then writing hundreds of 
> regular expressions is not feasible for a large number of domains.
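The exemption idea in the description can be sketched as follows (a hypothetical illustration, not the attached ignore-exempt.patch; class and method names are invented, and the suffix regex stands in for a MIME-type rule):

```java
import java.util.List;
import java.util.regex.Pattern;

public class ExemptionFilter {

    private final List<Pattern> patterns;

    ExemptionFilter(List<Pattern> patterns) {
        this.patterns = patterns;
    }

    /** True if an otherwise-external URL should still be fetched. */
    boolean isExempted(String url) {
        for (Pattern p : patterns) {
            if (p.matcher(url).find()) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Rule: exempt external image resources (e.g. hosted on CDNs),
        // matched here by file suffix as a stand-in for a MIME-type check.
        ExemptionFilter f = new ExemptionFilter(
            List.of(Pattern.compile("(?i)\\.(jpe?g|png|gif)([?#].*)?$")));
        System.out.println(f.isExempted("http://cdn.other.com/logo.png"));  // true
        System.out.println(f.isExempted("http://cdn.other.com/page.html")); // false
    }
}
```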



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
