Hi Lewis,

As you said, UpdateDBJob doesn't expect a crawlId, but if I pass one (e.g.
-crawlId c10) it doesn't create a new 'webpage' table, uses the pre-existing
'c10_webpage' table instead, and SolrIndexJob then successfully inserts the
docs into Solr.

I just wonder how a new Nutch user like me could solve these kinds of issues
without the community's help.

Thanks for the help.
Tony.


On Wed, Jun 26, 2013 at 1:10 AM, Lewis John Mcgibbney <
lewis.mcgibb...@gmail.com> wrote:

> Hi Tony,
>
> On Tue, Jun 25, 2013 at 1:10 AM, Tony Mullins <tonymullins...@gmail.com
> >wrote:
>
> >
> > So what should I do now to run my complete cycle of Nutch2.x jobs and
> > insert my docs to Solr ?
> >
> >
> I'm not using HBase as a backend, however I know that, as per the crawl
> script, updatedb doesn't use the crawlId parameter. Please try adding the
> parameter and see if it works.
> Thanks
> Lewis
>
