[ 
https://issues.apache.org/jira/browse/NUTCH-963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Markus Jelsma updated NUTCH-963:
--------------------------------

    Attachment: SolrClean.java
                NUTCH-963-command-and-log4j.patch

Here's a patch for the bin/nutch script adding the solrclean command. The patch 
also contains a log4j rule that sends some of the solrclean command's output to 
stdout as well as to the log file. I've also attached the SolrClean class 
(formerly Solr404Deleter) and added a commit that is issued only if > 0 deletes 
were found.
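For reference, the core of what the attached class does can be sketched as follows. This is a simplified, self-contained sketch, not the attached SolrClean.java itself: the DeleteSink interface and the STATUS_DB_GONE constant's value are stand-ins for Nutch's CrawlDatum status and a SolrJ client, used here so the delete/commit logic is visible without Hadoop or Solr dependencies.

```java
import java.util.Map;

public class SolrCleanSketch {
    // Stand-in for CrawlDatum.STATUS_DB_GONE (value here is illustrative only).
    static final byte STATUS_DB_GONE = 3;

    // Minimal stand-in for a Solr client: receives delete-by-id and commit calls.
    interface DeleteSink {
        void deleteById(String url);
        void commit();
    }

    // Scan (url, status) pairs from the CrawlDB, issue a delete for every
    // gone (404) page, and commit only when at least one delete was sent.
    static int clean(Map<String, Byte> crawlDb, DeleteSink solr) {
        int deleted = 0;
        for (Map.Entry<String, Byte> e : crawlDb.entrySet()) {
            if (e.getValue() == STATUS_DB_GONE) {
                solr.deleteById(e.getKey());
                deleted++;
            }
        }
        if (deleted > 0) {
            solr.commit(); // skip the commit entirely when nothing was deleted
        }
        return deleted;
    }
}
```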

@Claudio:
About the redirection: Nutch doesn't seem to set the status values properly. 
Fetching a page (for the _first time_) that returns HTTP 301 results in 
db_redir_temp for that CrawlDB entry instead of db_redir_perm. So something is 
wrong here that needs a fix first. Also, if you fetch a page that was initially 
not modified but later returns an HTTP 301, the entry keeps the db_notmodified 
status.

> Add support for deleting Solr documents with STATUS_DB_GONE in CrawlDB (404 
> urls)
> ---------------------------------------------------------------------------------
>
>                 Key: NUTCH-963
>                 URL: https://issues.apache.org/jira/browse/NUTCH-963
>             Project: Nutch
>          Issue Type: New Feature
>          Components: indexer
>    Affects Versions: 2.0
>            Reporter: Claudio Martella
>            Assignee: Markus Jelsma
>            Priority: Minor
>             Fix For: 1.3, 2.0
>
>         Attachments: NUTCH-963-command-and-log4j.patch, Solr404Deleter.java, 
> SolrClean.java
>
>
> When issuing recrawls it can happen that certain urls have expired (i.e. URLs 
> that don't exist anymore and return 404).
> This patch creates a new command in the indexer that scans the crawldb 
> looking for these urls and issues delete commands to SOLR.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.