[ https://issues.apache.org/jira/browse/NUTCH-656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Julien Nioche resolved NUTCH-656.
---------------------------------

    Resolution: Fixed

Committed revision 1541883.

Committed with a few minor changes compared to the latest patch (e.g. it decides 
which entry is the duplicate based on URL length as a last resort).
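
In other words, when two entries share a signature and tie on the configured 
criteria, URL length breaks the tie. A minimal sketch of that last resort, 
assuming the shorter URL is kept as the original (the method name is 
illustrative, not the committed code):

    // Last-resort tiebreak (sketch): with equal score and fetch time, assume
    // the shorter URL is the original and mark the longer one as the duplicate.
    static String pickDuplicate(String urlA, String urlB) {
        return urlA.length() <= urlB.length() ? urlB : urlA;
    }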

Thanks for the review and comments.

> DeleteDuplicates based on crawlDB only 
> ---------------------------------------
>
>                 Key: NUTCH-656
>                 URL: https://issues.apache.org/jira/browse/NUTCH-656
>             Project: Nutch
>          Issue Type: Wish
>          Components: indexer
>            Reporter: Julien Nioche
>            Assignee: Julien Nioche
>         Attachments: NUTCH-656.patch, NUTCH-656.v2.patch, NUTCH-656.v3.patch
>
>
> The existing dedup functionality relies on Lucene indices and can't be used 
> when the indexing is delegated to SOLR.
> I was wondering whether we could use the information from the crawlDB instead 
> to detect URLs to delete, then do the deletions in an indexer-neutral way. As 
> far as I understand, the crawlDB contains all the elements we need for dedup, 
> namely:
> * URL 
> * signature
> * fetch time
> * score
> In map-reduce terms we would have two different jobs:
> * read the crawlDB and compare on URLs: keep only the most recent element; the 
> older ones are stored in a file and will be deleted later
> * read the crawlDB and have a map function generating signatures as keys and 
> URL + fetch time + score as the value
> * the reduce function would depend on which parameter is set (i.e. use 
> signature or score) and would output a list of URLs to delete
> This assumes that we can then use the URLs to identify documents in the 
> indices.
> Any thoughts on this? Am I missing something?
> Julien
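
For illustration, the signature-based job sketched above could look roughly 
like the following in Hadoop MapReduce. This is a sketch only: it assumes the 
crawlDB has first been dumped to text lines of the form 
url<TAB>signature<TAB>fetchTime<TAB>score, and the class and field names are 
illustrative rather than Nutch's actual code, which reads the crawlDB records 
directly:

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    // Sketch of the signature-based dedup job (illustrative, not Nutch's API).
    public class DedupBySignature {

      // Map: re-key each entry by its signature; the value carries the URL,
      // fetch time and score that the reducer needs to pick the "original".
      public static class SigMapper extends Mapper<LongWritable, Text, Text, Text> {
        @Override
        protected void map(LongWritable offset, Text line, Context ctx)
            throws IOException, InterruptedException {
          String[] f = line.toString().split("\t");
          if (f.length != 4) return; // skip malformed lines
          // key: signature; value: url \t fetchTime \t score
          ctx.write(new Text(f[1]), new Text(f[0] + "\t" + f[2] + "\t" + f[3]));
        }
      }

      // Reduce: keep one entry per signature and emit every other URL for
      // deletion. Comparison order here: score, then fetch time, then URL
      // length as the last resort (shorter URL wins).
      public static class SigReducer extends Reducer<Text, Text, Text, Text> {
        @Override
        protected void reduce(Text signature, Iterable<Text> values, Context ctx)
            throws IOException, InterruptedException {
          String bestUrl = null;
          long bestTime = Long.MIN_VALUE;
          float bestScore = Float.NEGATIVE_INFINITY;
          for (Text v : values) {
            String[] f = v.toString().split("\t");
            String url = f[0];
            long time = Long.parseLong(f[1]);
            float score = Float.parseFloat(f[2]);
            boolean better;
            if (score != bestScore) better = score > bestScore;
            else if (time != bestTime) better = time > bestTime;
            else better = bestUrl != null && url.length() < bestUrl.length();
            if (bestUrl == null || better) {
              if (bestUrl != null) ctx.write(new Text(bestUrl), new Text("delete"));
              bestUrl = url; bestTime = time; bestScore = score;
            } else {
              ctx.write(new Text(url), new Text("delete"));
            }
          }
        }
      }
    }

The first job (comparing on URLs and keeping only the most recent entry) would 
have the same shape, keyed on URL instead of signature.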



--
This message was sent by Atlassian JIRA
(v6.1#6144)
