Yura Taras wrote:
Unfortunately, setting mapred.child.tmp doesn't help. Could you share
your sample config files?
As for VMware, I'm considering it as a last resort :)

On Thu, Jan 28, 2010 at 3:41 PM, Yang Li <liy...@cn.ibm.com> wrote:
I met the same problem on WinXP+Cygwin and fixed it by
either:
- moving to a linux box (VMWare works very well)
or:
- configuring a "mapred.child.tmp" parameter in core-site.xml

I cannot explain why or how "mapred.child.tmp" is related to the problem.
From the source code, it seems to be a JVM issue on Windows. Hope this helps.
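For reference, the property I added looks roughly like this (the value shown is just an example path; I believe the default for mapred.child.tmp is a relative "./tmp" under the task's working directory, which appears to be what breaks on Windows, so any absolute path should do):

```xml
<!-- Added to core-site.xml; the /hdfs/hadoop/tmp value is an example,
     any absolute path accessible to the task JVM should work. -->
<property>
  <name>mapred.child.tmp</name>
  <value>/hdfs/hadoop/tmp</value>
</property>
```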

Best regards,
-----------------------
Li Yang, ext. 22056, User Technologies Development, Shanghai, China




Yura Taras <yura.ta...@gmail.com>
2010-01-28 00:41
Please respond to
common-user@hadoop.apache.org


To
common-user@hadoop.apache.org
cc

Subject
Failed to install Hadoop on WinXP

Hi all
I'm trying to deploy a pseudo-distributed cluster on my devbox, which
runs WinXP. I did the following steps:
1. Installed cygwin with ssh, configured ssh
2. Downloaded hadoop and extracted it, set JAVA_HOME and HADOOP_HOME
env vars (I made a symlink to the Java home, so it doesn't contain spaces)
3. Adjusted conf/hadoop-env.sh to point to correct JAVA_HOME
4. Adjusted the conf files to the following values:
  * core-site.xml:
<configuration>
   <property>
     <name>hadoop.tmp.dir</name>
     <value>/hdfs/hadoop</value>
     <description>A base for other temporary directories.</description>
   </property>
   <property>
       <name>fs.default.name</name>
       <value>hdfs://localhost:8888</value>
   </property>
</configuration>

 * hdfs-site.xml:
<configuration>
 <property>
   <name>dfs.replication</name>
   <value>1</value>
 </property>
</configuration>

 * mapred-site.xml:
<configuration>
 <property>
   <name>mapred.job.tracker</name>
   <value>localhost:9999</value>
 </property>
</configuration>

5. Next I execute the following line: $ bin/hadoop namenode -format &&
bin/start-all.sh && bin/hadoop fs -put conf input && bin/hadoop jar
hadoop-0.20.1-examples.jar grep input output 'dfs[a-z.]'
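(As an aside, the spaces-free JAVA_HOME symlink from step 2 can be set up from Cygwin roughly like this; the JDK path and symlink location are assumptions, adjust them to the actual install:)

```sh
# Assumed JDK install location; adjust to your machine.
ln -s "/cygdrive/c/Program Files/Java/jdk1.6.0" /usr/local/java

# Then point conf/hadoop-env.sh at the space-free path:
export JAVA_HOME=/usr/local/java
```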

I receive the following exception:
localhost: starting tasktracker, logging to /home/ytaras/hadoop/bin/../logs/hadoop-ytaras-tasktracker-bueno.out
10/01/27 18:23:55 INFO mapred.FileInputFormat: Total input paths to process : 13
10/01/27 18:23:56 INFO mapred.JobClient: Running job: job_201001271823_0001
10/01/27 18:23:57 INFO mapred.JobClient:  map 0% reduce 0%
10/01/27 18:24:09 INFO mapred.JobClient: Task Id : attempt_201001271823_0001_m_000014_0, Status : FAILED
java.io.FileNotFoundException: File D:/hdfs/hadoop/mapred/local/taskTracker/jobcache/job_201001271823_0001/attempt_201001271823_0001_m_000014_0/work/tmp does not exist.
        at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:361)
        at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:245)
        at org.apache.hadoop.mapred.TaskRunner.setupWorkDir(TaskRunner.java:519)
        at org.apache.hadoop.mapred.Child.main(Child.java:155)

!! SKIP - the above exception repeats a few times !!

10/01/27 18:24:51 INFO mapred.JobClient: Job complete: job_201001271823_0001
10/01/27 18:24:51 INFO mapred.JobClient: Counters: 0
java.io.IOException: Job failed!
        at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1252)
        at org.apache.hadoop.examples.Grep.run(Grep.java:69)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.hadoop.examples.Grep.main(Grep.java:93)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
        at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
        at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:156)



Am I doing something wrong (don't say just 'use Linux' :-) )?
Thanks


I've run into these problems too. I think you'll find Karmasphere Studio for Hadoop interesting: http://www.hadoopstudio.org/ (although I haven't fully tested it myself).

Brian
