> I downloaded both pieces of software; however, I am getting the error
> (*solrUrl is not set, indexing will be skipped..*) when I try to crawl
> using Cygwin.
>
> Can anyone please help me fix this issue? Otherwise, any other website
> covering Apache Nutch and Solr integration would be greatly helpful.
>
> Thanks & Regards,
> Serenity
techietutorials.blogspot.com/2011/06/how-to-build-and-start-apache-solr.html
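That warning means Nutch was never told where Solr lives, so the indexing step is skipped. In Nutch 1.x the all-in-one `crawl` command only pushes documents to Solr when the `-solr` option is passed. A minimal sketch, assuming a local Solr at the default port and a `urls` seed directory (the URL, depth, and topN values are illustrative assumptions):

```shell
# Run from the Nutch runtime directory (e.g. under Cygwin).
# Without -solr, Nutch logs "solrUrl is not set, indexing will be skipped.."
# and finishes the crawl without sending anything to Solr.
bin/nutch crawl urls -solr http://localhost:8983/solr/ -depth 3 -topN 50
```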
> at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:810)
> at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:781)
> at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:730)
> at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1249)
> at org.apache.nutch.crawl.Injector.inject(Injector.java:217)
> at org.apache.nutch.crawl.Crawl.main(Crawl.java:124)
All,
I have a couple of websites that I need to crawl, and the following command
line used to work, I think. Solr is up and running and everything is fine
there, and I can go through and index the site, but I really need the results
added to Solr after the crawl. Does anyone have any idea on how to make this
work?
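If the crawl has already run without the Solr step, the existing crawl data can still be pushed to Solr afterwards with the Nutch 1.x `solrindex` job. A hedged sketch, assuming the crawl output lives under a `crawl/` directory and Solr is at the default local URL (both are assumptions, not details from this thread):

```shell
# Index an already-completed crawl into Solr (Nutch 1.x).
# solrindex takes the Solr URL, the crawldb, optionally the linkdb,
# and one or more segment directories.
bin/nutch solrindex http://localhost:8983/solr/ crawl/crawldb \
  -linkdb crawl/linkdb crawl/segments/*
```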