Hi Prashant,

Do you require wiki write access?

Best

Lewis

On Mon, Dec 3, 2012 at 4:08 AM, Prashant Ladha <prashant.la...@gmail.com> wrote:
> I found a possible solution. I ended up modifying the nutch-default.xml
> file to hard-code the plugin.folders path.[0]
> If everyone ends up doing the same thing, then we should add it to the
> installation guide.[1]
>
> [0]
> <property>
>   <name>plugin.folders</name>
>   <!-- <value>plugins</value> -->
>   <value>/home/prashant/workspaceNutchTrunk/trunk/build/plugins</value>
>   <description>Directories where nutch plugins are located.  Each
>   element may be a relative or absolute path.  If absolute, it is used
>   as is.  If relative, it is searched for on the classpath.</description>
> </property>
>
> This was already discussed in the Nutch JIRA:
> https://issues.apache.org/jira/browse/NUTCH-937
>
> [1]
> http://wiki.apache.org/nutch/RunNutchInEclipse
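>
> A side note for the guide: instead of editing nutch-default.xml in place,
> the same override can go in conf/nutch-site.xml, which takes precedence
> over the defaults and keeps local settings out of the version-controlled
> nutch-default.xml. A minimal sketch, reusing the path from my workspace
> (adjust it to your own checkout):
>
> <configuration>
>   <property>
>     <name>plugin.folders</name>
>     <!-- absolute path to the plugins built under build/ -->
>     <value>/home/prashant/workspaceNutchTrunk/trunk/build/plugins</value>
>   </property>
> </configuration>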
>
>
> On Sun, Dec 2, 2012 at 5:47 PM, Prashant Ladha
> <prashant.la...@gmail.com> wrote:
>
>> Hi Markus,
>> After sending the email, I went through the instructions at [0] again.
>> They say to look at the hadoop.log file, and in that log I found [1].
>> Are there any native Hadoop libraries that we have to install?
>>
>> I am on Ubuntu 12.10, JDK 1.7, and trunk Nutch.
>>
>> [0] http://wiki.apache.org/nutch/RunNutchInEclipse
>> [1] attached hadoop.log
>>
>>
>> On Sun, Dec 2, 2012 at 5:45 PM, Markus Jelsma
>> <markus.jel...@openindex.io> wrote:
>>
>>> Hi - please provide the log output and version number.
>>>
>>> -----Original message-----
>>> > From: Prashant Ladha <prashant.la...@gmail.com>
>>> > Sent: Sun 02-Dec-2012 23:37
>>> > To: user@nutch.apache.org
>>> > Subject: Local Trunk Build - java.io.IOException: Job failed!
>>> >
>>> > Hi,
>>> > Earlier, I was on Windows 7 and hitting an exception that nobody else
>>> > had seen, so I moved to Ubuntu.
>>> > But here I am seeing the error message below.
>>> > Can you help me figure out what I could be doing wrong?
>>> >
>>> >
>>> > SLF4J: Class path contains multiple SLF4J bindings.
>>> > SLF4J: Found binding in
>>> > [jar:file:/home/prashant/.ivy2/cache/org.slf4j/slf4j-log4j12/jars/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> > SLF4J: Found binding in
>>> > [jar:file:/home/prashant/workspaceNutchTrunk/trunk/build/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>> > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>>> > explanation.
>>> > solrUrl is not set, indexing will be skipped...
>>> > crawl started in: crawl
>>> > rootUrlDir = urls
>>> > threads = 10
>>> > depth = 3
>>> > solrUrl=null
>>> > topN = 50
>>> > Injector: starting at 2012-12-02 17:25:13
>>> > Injector: crawlDb: crawl/crawldb
>>> > Injector: urlDir: urls
>>> > Injector: Converting injected urls to crawl db entries.
>>> > Injector: total number of urls rejected by filters: 0
>>> > Injector: total number of urls injected after normalization and
>>> > filtering: 0
>>> > Injector: Merging injected urls into crawl db.
>>> > Injector: finished at 2012-12-02 17:25:28, elapsed: 00:00:14
>>> > Generator: starting at 2012-12-02 17:25:28
>>> > Generator: Selecting best-scoring urls due for fetch.
>>> > Generator: filtering: true
>>> > Generator: normalizing: true
>>> > Generator: topN: 50
>>> > Generator: jobtracker is 'local', generating exactly one partition.
>>> > Exception in thread "main" java.io.IOException: Job failed!
>>> >     at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1265)
>>> >     at org.apache.nutch.crawl.Generator.generate(Generator.java:551)
>>> >     at org.apache.nutch.crawl.Generator.generate(Generator.java:456)
>>> >     at org.apache.nutch.crawl.Crawl.run(Crawl.java:130)
>>> >     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>>> >     at org.apache.nutch.crawl.Crawl.main(Crawl.java:55)
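>>>
>>> One detail that stands out in the output above: the injector reports 0
>>> urls injected after normalization and filtering, so the generate step
>>> has an empty crawldb to select from. Two things worth checking: that
>>> the urls directory actually contains a seed file, and that
>>> conf/regex-urlfilter.txt is not rejecting everything. A hypothetical
>>> urls/seed.txt, one URL per line:
>>>
>>>   http://nutch.apache.org/
>>>
>>> The console only says "Job failed!"; the underlying exception lands in
>>> logs/hadoop.log, which is the first place to look.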
>>> >
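>>>
>>> The SLF4J lines at the top are only a warning: the same slf4j-log4j12
>>> binding appears on the classpath twice, once from the ivy cache and
>>> once from build/lib, and SLF4J simply picks one. To silence it in
>>> Eclipse, drop one of the duplicates from the project build path; a
>>> sketch of the kind of .classpath entry to remove, assuming the
>>> build/lib jars were added to the project by hand:
>>>
>>> <classpathentry kind="lib" path="build/lib/slf4j-log4j12-1.6.1.jar"/>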
>>>
>>
>>



-- 
Lewis
