Hi Dean,

Try removing the fs.default.name property from hdfs-site.xml and putting it
in core-site.xml instead.
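
For example (using the NameNode address from your hdfs-site.xml below — adjust if your daemons actually listen elsewhere), core-site.xml would carry just the default filesystem:

```xml
<?xml version="1.0"?>
<configuration>
  <!-- fs.default.name belongs in core-site.xml; clients read it from
       here to resolve bare paths against HDFS instead of file:/// -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://206.88.43.168:54310</value>
  </property>
</configuration>
```

The HDFS-specific properties (dfs.data.dir, dfs.replication, etc.) stay in hdfs-site.xml.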


On Wed, Dec 8, 2010 at 2:46 PM, Hiller, Dean (Contractor) <
dean.hil...@broadridge.com> wrote:

> I ran the following wordcount example. (My hadoop shell seems to always hit
> the local file system first, so I had to add the hdfs:// prefix…is that
> normal?? I mean, I see it printing configDir=, which is where I moved the
> config dir and what I set the env var to, and the config files there do have
> the location, but it still hits the local file system.)
>
>
>
> [r...@localhost hadoop]# ./bin/hadoop jar hadoop-0.20.2-examples.jar wordcount \
>     hdfs://206.88.43.8:54310/wordcount hdfs://206.88.43.168:54310/wordcount-out
>
>
>
> configDir=/mnt/mucho/hadoop-config/
>
>
> classpath=/opt/hbase-install/hbase/hbase-0.20.6.jar:/opt/hbase-install/hbase/hbase-0.20.6-test.jar:/mnt/mucho/hbase-config/:/opt/hbase-install/hbase/lib/zookeeper-3.2.2.jar
>
> 10/12/08 08:42:33 INFO input.FileInputFormat: Total input paths to process : 13
>
> org.apache.hadoop.ipc.RemoteException: java.io.FileNotFoundException: File
> file:/tmp/hadoop-root/mapred/system/job_201012080654_0010/job.xml does not exist.
>
>         at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:361)
>         at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:245)
>         at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:192)
>         at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:142)
>         at org.apache.hadoop.fs.LocalFileSystem.copyToLocalFile(LocalFileSystem.java:61)
>         at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1197)
>         at org.apache.hadoop.mapred.JobInProgress.<init>(JobInProgress.java:257)
>         at org.apache.hadoop.mapred.JobInProgress.<init>(JobInProgress.java:234)
>         at org.apache.hadoop.mapred.JobTracker.submitJob(JobTracker.java:2993)
>
>
>
> In case it helps, here is my hdfs-site.xml. It is used by both the started
> daemons AND the client (is that an issue, using the same one?)…
>
> <configuration>
>   <property>
>     <name>fs.default.name</name>
>     <value>hdfs://206.88.43.168:54310</value>
>   </property>
>   <property>
>     <name>hadoop.tmp.dir</name>
>     <value>/opt/data/hadooptmp</value>
>   </property>
>   <property>
>     <name>dfs.data.dir</name>
>     <value>/opt/data/hadoop</value>
>   </property>
>   <property>
>     <name>dfs.replication</name>
>     <value>2</value>
>   </property>
> </configuration>
>
>
>
>
>
