Hi,

On Thu, 10 May 2012 09:10:04 +0300, Tolga <to...@ozses.net> wrote:
Hi,

This will sound like a duplicate, but actually it differs from the
other one. Please bear with me. Following
http://wiki.apache.org/nutch/NutchTutorial, I first issued the command

bin/nutch crawl urls -solr http://localhost:8983/solr/ -depth 3 -topN 5

Then when I got the message

Exception in thread "main" java.io.IOException: Job failed!
    at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1252)
    at org.apache.nutch.indexer.solr.SolrDeleteDuplicates.dedup(SolrDeleteDuplicates.java:373)
    at org.apache.nutch.indexer.solr.SolrDeleteDuplicates.dedup(SolrDeleteDuplicates.java:353)
    at org.apache.nutch.crawl.Crawl.run(Crawl.java:153)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
    at org.apache.nutch.crawl.Crawl.main(Crawl.java:55)

Please include the relevant part of the log. This may be a known issue.


I issued the commands

bin/nutch crawl urls -dir crawl -depth 3 -topN 5

and

bin/nutch solrindex http://127.0.0.1:8983/solr/ crawldb -linkdb crawldb/linkdb crawldb/segments/*

separately, after which I got no errors. When I browsed to
http://localhost:8983/solr/admin and attempted a search, I got the
error


   HTTP ERROR 400

Problem accessing /solr/select. Reason:

    undefined field text
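A side note on the solrindex invocation above: a crawl run with `-dir crawl` (as in the first command) writes its databases under `crawl/`, so the indexing step in the NutchTutorial is normally pointed at those paths. A sketch, assuming that tutorial layout:

```shell
# Paths assume the layout produced by "bin/nutch crawl urls -dir crawl ...";
# adjust "crawl/" to whatever -dir value was actually used.
bin/nutch solrindex http://127.0.0.1:8983/solr/ crawl/crawldb \
    -linkdb crawl/linkdb crawl/segments/*
```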

But this is a Solr issue: you have no field named "text". Resolve this in Solr, or ask on the Solr mailing list.
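Since the 400 came from a query against an undefined field, a quick check is to query a field the schema actually defines. A minimal sketch, assuming the stock schema.xml that ships with Nutch (which indexes the page body into a field named `content`):

```shell
# Hypothetical query: "content" is the body field in the schema.xml shipped
# with Nutch; substitute any field your Solr schema actually defines.
curl "http://localhost:8983/solr/select?q=content:apache&fl=url,title"
```

Alternatively, the default search field can be changed in Solr's schema.xml (`<defaultSearchField>content</defaultSearchField>` in Solr 3.x) so that bare queries stop referencing the missing `text` field.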



------------------------------------------------------------------------
Powered by Jetty://

What am I doing wrong?

Regards,
--
Markus Jelsma - CTO - Openindex
