Ah, thank you, that's much closer. I just have one other question.
When I try to compile SolrClientAdapter.java from the zip file on the
page you mentioned, I get "cannot find symbol" errors on the calls to
solrDoc.setBoost(), solrDoc.addField(), and solrDoc.add().
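For what it's worth (from memory, worth double-checking against the
javadoc for your versions): those three methods don't all live on the
same class, so "cannot find symbol" is what you'd see if the types got
crossed between Lucene's Document and SolrJ's SolrInputDocument.
Roughly, the APIs of that era look like:

    // org.apache.solr.common.SolrInputDocument (SolrJ)
    void addField(String name, Object value);
    void addField(String name, Object value, float boost);
    void setDocumentBoost(float boost);   // no setBoost(), no add()

    // org.apache.lucene.document.Document (Lucene)
    void setBoost(float boost);
    void add(Fieldable field);            // no addField()

So if solrDoc is a SolrInputDocument, setBoost()/add() won't resolve,
and if it's a Lucene Document, addField() won't — which matches the
mix of errors you're seeing.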
I've looked around a
Now I have this:
[EMAIL PROTECTED] search]$ ./bin/nutch crawl urls -dir crawled -depth 3
crawl started in: crawled
rootUrlDir = urls
threads = 10
depth = 3
Injector: starting
Injector: crawlDb: crawled/crawldb
Injector: urlDir: urls
Injector: Converting injected urls to crawl db entries.
Exceptio
On Feb 12, 2008, at 11:57 AM, Nick Tkach wrote:
Has anyone tried to apply/use the patches to the Nutch trunk from
NUTCH-442? Between that code and the example from Sami's FooFactory
weblog I've been able to at least get things running, but still hit a
snag. When I try to run SolrIndexer.java I get an error from the Hadoop
MapTask (via Index
We have had the same problem (I think it is not the partitioning but the
last part of the select that goes wrong). We solved it by turning off
speculative execution.
In hadoop-site.xml:

  <property>
    <name>mapred.speculative.execution</name>
    <value>false</value>
  </property>

(The property's description: "If true, then multiple instances of some
map and reduce tasks