Hi All,

I have followed the steps below to debug the Hadoop core in single-node
mode. Since I now want to test my system in a distributed setting, could
anyone please help me debug the code when running a Hadoop jar in
distributed settings?

thanks,
--Pramod
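For reference, the distributed case is usually handled by attaching a remote
debugger to the task JVMs rather than launching RunJar inside Eclipse. A
sketch, assuming Hadoop 0.20-era property names (the port number is an
arbitrary choice, not a Hadoop default):

```xml
<!-- mapred-site.xml (sketch): start each child task JVM with a JDWP agent.
     suspend=y makes the task wait until a debugger attaches on port 8000;
     the port is an assumption, not a Hadoop default. -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=8000</value>
</property>
```

Eclipse can then attach through a "Remote Java Application" debug
configuration pointed at the node running the task. With multiple concurrent
tasks per node the fixed port will clash, so limiting each node to a single
task slot while debugging is a common workaround.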

On Wed, Jul 14, 2010 at 12:07 AM, Pramy Bhats <pramybh...@googlemail.com> wrote:

> Hi,
>
> I am trying to debug the newly built hadoop-core-dev.jar in Eclipse. To
> simplify the debugging process, I first set up Hadoop in single-node mode
> on my localhost.
>
>
> a)  Configure a debug configuration in Eclipse:
>
>    under tab Main:
>      project: hadoop-all
>      main class: org.apache.hadoop.util.RunJar
>
>    under tab Arguments:
>      program arguments: <absolute path for wordcount jar file>/wordcount.jar
>        org.wordcount.WordCount <input-text-file-already-in-hdfs> (text)
>        <desired-output-file> (output)
>      VM arguments: -Xmx256M
>
>
>    under tab Classpath:
>
>      user entries: add external jar (hadoop-0.20.3-core-dev.jar), so that
> I can debug my newly built Hadoop core jar.
>
>
>    under tab Source:
>
>      I add the source folder for the wordcount example (so the debugger
> can look up sources during the session).
>
>
> I apply this configuration and start the debug session.
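As an aside on what this launch configuration actually runs:
org.apache.hadoop.util.RunJar unpacks the job jar and invokes the named main
class reflectively with the remaining program arguments, which is why
reflective frames show up in job stack traces. A minimal, self-contained
sketch of that mechanism, using a stand-in job class rather than the real
wordcount:

```java
import java.lang.reflect.Method;

// Editorial sketch (not Hadoop's actual source): RunJar locates the job's
// main class by name and invokes main(String[]) reflectively with the
// program arguments that follow the class name. FakeJob stands in for the
// wordcount driver class.
public class RunJarSketch {
    public static class FakeJob {
        public static void main(String[] args) {
            System.out.println("job args: " + String.join(" ", args));
        }
    }

    public static void main(String[] args) throws Exception {
        // In RunJar the class name comes from the first program argument
        // after the jar path; here it is hard-coded for the sketch.
        Class<?> jobClass = Class.forName("RunJarSketch$FakeJob");
        Method mainMethod = jobClass.getMethod("main", String[].class);
        String[] jobArgs = {"text", "output"};  // remaining program arguments
        mainMethod.invoke(null, (Object) jobArgs);  // prints "job args: text output"
    }
}
```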
>
>
> b)  Debugging works fine, and I can perform all debug operations.
> However, I get the following problem:
>
> 2010-07-14 00:02:15,816 WARN  conf.Configuration (Configuration.java:<clinit>(176)) - DEPRECATED: hadoop-site.xml found in the classpath. Usage of hadoop-site.xml is deprecated. Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of core-default.xml, mapred-default.xml and hdfs-default.xml respectively
> 2010-07-14 00:02:16,535 INFO  jvm.JvmMetrics (JvmMetrics.java:init(71)) - Initializing JVM Metrics with processName=JobTracker, sessionId=
> Exception in thread "main" org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/home/hadoop/code/hadoop-0.20.2/text
>     at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:224)
>     at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:241)
>     at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:885)
>     at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:779)
>     at org.apache.hadoop.mapreduce.Job.submit(Job.java:432)
>     at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:447)
>     at org.selfadjust.wordcount.WordCount.run(WordCount.java:32)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
>     at org.selfadjust.wordcount.WordCount.main(WordCount.java:43)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>
>
> However, the file named "text" is already stored in HDFS.
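The "file:/home/..." scheme in the exception suggests the job resolved the
input path against the local filesystem rather than HDFS, which typically
happens when the configuration directory containing core-site.xml is not on
the Eclipse launch classpath. A sketch of the relevant property, assuming a
default single-node setup (the host and port are assumptions):

```xml
<!-- core-site.xml (sketch): without this on the classpath, relative paths
     resolve against file:// rather than HDFS. hdfs://localhost:9000 is an
     assumed single-node NameNode address. -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
</property>
```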
>
>
> Could you please help me with the debugging process here? Any pointers on
> setting up the debugging environment would be very helpful.
>
>
> thanks,
>
> --PB
>
>
>
